Dec 08 17:40:34 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 17:40:34 crc kubenswrapper[5113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:34 crc kubenswrapper[5113]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 17:40:34 crc kubenswrapper[5113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:34 crc kubenswrapper[5113]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:34 crc kubenswrapper[5113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 17:40:34 crc kubenswrapper[5113]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.497175 5113 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499695 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499711 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499715 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499719 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499729 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499733 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499737 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499742 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499746 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499753 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499757 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499761 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499765 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499769 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499772 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499776 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499779 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499783 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499786 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499790 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499793 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499797 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499800 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499804 5113 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499807 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499810 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499814 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499817 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499820 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499824 5113 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499827 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499830 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499833 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499836 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499840 5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499844 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499847 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499851 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499854 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499858 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499862 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499867 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499872 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499876 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499879 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499883 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499887 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499891 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499894 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499897 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499901 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499904 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499908 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499912 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499917 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499922 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499926 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499930 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499933 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499937 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499940 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499944 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499947 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499951 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499954 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499959 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499962 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499965 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499968 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499971 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499975 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499978 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499983 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499987 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499990 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499993 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.499997 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500000 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500003 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500007 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500012 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500015 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500019 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500022 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500025 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500029 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
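[Editor's note: the "unrecognized feature gate" names above are OpenShift-level gates that the upstream kubelet binary does not define, so it warns and continues; the same list is logged again below as later startup stages re-parse the gate set. Since the FLAG dump further down shows --feature-gates="", the gates evidently reach the kubelet through its config file rather than the command line. A minimal sketch of how gates the kubelet does recognize would be expressed there, using the two values this log confirms were set (KMSv1 and ServiceAccountTokenNodeBinding); the surrounding fields are illustrative, not the node's actual file:

# Sketch (assumed, not the node's actual file): feature gates in the
# KubeletConfiguration referenced by --config. Names the kubelet binary
# does not know produce the "unrecognized feature gate" warnings above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KMSv1: true                            # logged: "Setting deprecated feature gate KMSv1=true"
  ServiceAccountTokenNodeBinding: true   # logged: "Setting GA feature gate ServiceAccountTokenNodeBinding=true"
]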
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500472 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500482 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500485 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500488 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500491 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500495 5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500499 5113 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500502 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500506 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500512 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500520 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500526 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500530 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500535 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500539 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500544 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500548 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500552 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500557 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500560 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500564 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500569 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500572 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500576 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500580 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500583 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500587 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500593 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500596 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500600 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500603 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500606 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500609 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500613 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500617 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500620 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500624 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500627 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500631 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500635 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500638 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500641 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500645 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500648 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500651 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500654 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500658 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500661 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500665 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500669 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500672 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500676 5113 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500681 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500685 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500690 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500694 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500698 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500701 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500705 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500708 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500711 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500714 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500717 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500721 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500724 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500727 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500730 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500736 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500740 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500743 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500746 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500749 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500753 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500756 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500759 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500762 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500766 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500769 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500772 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500775 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500778 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500781 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500785 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500788 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500791 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.500795 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501022 5113 flags.go:64] FLAG: --address="0.0.0.0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501051 5113 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501061 5113 flags.go:64] FLAG: --anonymous-auth="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501072 5113 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501078 5113 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501082 5113 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501088 5113 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501094 5113 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501099 5113 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501102 5113 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501106 5113 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501110 5113 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501115 5113 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501119 5113 flags.go:64] FLAG: --cgroup-root=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501124 5113 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501128 5113 flags.go:64] FLAG: --client-ca-file=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501132 5113 flags.go:64] FLAG: --cloud-config=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501135 5113 flags.go:64] FLAG: --cloud-provider=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501138 5113 flags.go:64] FLAG: --cluster-dns="[]"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501144 5113 flags.go:64] FLAG: --cluster-domain=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501147 5113 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501152 5113 flags.go:64] FLAG: --config-dir=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501156 5113 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501160 5113 flags.go:64] FLAG: --container-log-max-files="5"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501165 5113 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501169 5113 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501174 5113 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501181 5113 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501185 5113 flags.go:64] FLAG: --contention-profiling="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501189 5113 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501193 5113 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501196 5113 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501200 5113 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501206 5113 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501210 5113 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501213 5113 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501217 5113 flags.go:64] FLAG: --enable-load-reader="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501221 5113 flags.go:64] FLAG: --enable-server="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501225 5113 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501230 5113 flags.go:64] FLAG: --event-burst="100"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501234 5113 flags.go:64] FLAG: --event-qps="50"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501237 5113 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501241 5113 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501244 5113 flags.go:64] FLAG: --eviction-hard=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501249 5113 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501254 5113 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501257 5113 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501262 5113 flags.go:64] FLAG: --eviction-soft=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501265 5113 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501269 5113 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501273 5113 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501277 5113 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501280 5113 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501284 5113 flags.go:64] FLAG: --fail-swap-on="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501288 5113 flags.go:64] FLAG: --feature-gates=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501293 5113 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501297 5113 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501300 5113 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501304 5113 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501309 5113 flags.go:64] FLAG: --healthz-port="10248"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501313 5113 flags.go:64] FLAG: --help="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501316 5113 flags.go:64] FLAG: --hostname-override=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501320 5113 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501324 5113 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501327 5113 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501331 5113 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501335 5113 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501338 5113 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501341 5113 flags.go:64] FLAG: --image-service-endpoint=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501345 5113 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501348 5113 flags.go:64] FLAG: --kube-api-burst="100"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501353 5113 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501358 5113 flags.go:64] FLAG: --kube-api-qps="50"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501363 5113 flags.go:64] FLAG: --kube-reserved=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501367 5113 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501374 5113 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501379 5113 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501383 5113 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501386 5113 flags.go:64] FLAG: --lock-file=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501390 5113 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501394 5113 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501398 5113 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501405 5113 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501409 5113 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501413 5113 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501416 5113 flags.go:64] FLAG: --logging-format="text"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501420 5113 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501425 5113 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501429 5113 flags.go:64] FLAG: --manifest-url=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501432 5113 flags.go:64] FLAG: --manifest-url-header=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501438 5113 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501443 5113 flags.go:64] FLAG: --max-open-files="1000000"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501450 5113 flags.go:64] FLAG: --max-pods="110"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501453 5113 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501458 5113 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501461 5113 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501465 5113 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501469 5113 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501473 5113 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501477 5113 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501487 5113 flags.go:64] FLAG: --node-status-max-images="50"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501491 5113 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501495 5113 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501500 5113 flags.go:64] FLAG: --pod-cidr=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501504 5113 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501513 5113 flags.go:64] FLAG: --pod-manifest-path=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501518 5113 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501524 5113 flags.go:64] FLAG: --pods-per-core="0"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501529 5113 flags.go:64] FLAG: --port="10250"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501534 5113 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501538 5113 flags.go:64] FLAG: --provider-id=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501543 5113 flags.go:64] FLAG: --qos-reserved=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501548 5113 flags.go:64] FLAG: --read-only-port="10255"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501554 5113 flags.go:64] FLAG: --register-node="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501559 5113 flags.go:64] FLAG: --register-schedulable="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501564 5113 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501573 5113 flags.go:64] FLAG: --registry-burst="10"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501578 5113 flags.go:64] FLAG: --registry-qps="5"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501582 5113 flags.go:64] FLAG: --reserved-cpus=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501585 5113 flags.go:64] FLAG: --reserved-memory=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501590 5113 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501594 5113 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501598 5113 flags.go:64] FLAG: --rotate-certificates="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501602 5113 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501606 5113 flags.go:64] FLAG: --runonce="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501610 5113 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501614 5113 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501619 5113 flags.go:64] FLAG: --seccomp-default="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501622 5113 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501626 5113 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501630 5113 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501634 5113 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501638 5113 flags.go:64] FLAG: --storage-driver-password="root"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501642 5113 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501646 5113 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501650 5113 flags.go:64] FLAG: --storage-driver-user="root"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501653 5113 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501658 5113 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501661 5113 flags.go:64] FLAG: --system-cgroups=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501666 5113 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501672 5113 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501675 5113 flags.go:64] FLAG: --tls-cert-file=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501679 5113 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501683 5113 flags.go:64] FLAG: --tls-min-version=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501686 5113 flags.go:64] FLAG: --tls-private-key-file=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501690 5113 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501695 5113 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501699 5113 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501702 5113 flags.go:64] FLAG: --v="2"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501709 5113 flags.go:64] FLAG: --version="false"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501714 5113 flags.go:64] FLAG: --vmodule=""
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501720 5113 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.501724 5113 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501826 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501832 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501836 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501840 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501844 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501848 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501852 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501856 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501860 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501863 5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501867 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501871 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501874 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501878 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501881 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501885 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501889 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501893 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501896 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501900 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501903 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501906 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501909 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501912 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501915 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501919 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501924 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501927 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501930 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501934 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501937 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501940 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501943 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501946 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501950 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501954 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501957 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501961 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501964 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501968 5113 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501971 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501975 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501978 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501982 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501987 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501991 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501995 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.501999 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502004 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502008 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502011 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502015 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502019 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502022 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502026 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502032 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502048 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502052 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502055 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502060 5113 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502063 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502067 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502070 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502074 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502081 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502084 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502088 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502092 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502095 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502099 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502102 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502105 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502108 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502111 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502115 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502118 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502122 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502125 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502128 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502131 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502134 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502138 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502141 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502144 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502147 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.502150 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.502300 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.518671 5113 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.518721 5113 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518804 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518818 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518825 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518832 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518839 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518845 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518850 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518856 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518862 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518867 5113 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518873 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518878 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518883 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518888 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518893 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518899 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518905 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518911 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518916 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518922 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518927 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518932 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518937 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518942 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518946 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518954 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518959 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518964 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518970 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518975 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518979 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518984 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518989 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518994 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.518999 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519004 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519009 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519014 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519019 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519024 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519028 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519054 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519060 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519066 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519073 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519079 5113 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519085 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519090 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519095 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519100 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519106 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519113 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519121 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519126 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519132 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519137 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519143 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519148 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519152 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519158 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519163 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519167 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519172 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519177 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519181 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519186 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519191 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519195 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519200 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519205 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519210 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208
17:40:34.519215 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519219 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519224 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519229 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519270 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519279 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519284 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519289 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519293 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519298 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519303 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519310 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519316 5113 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519321 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519326 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.519336 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519494 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519504 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519509 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519515 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519520 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519525 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519530 5113 
feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519535 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519540 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519546 5113 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519551 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519555 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519560 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519565 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519570 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519575 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519580 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519584 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519589 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519595 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519600 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519605 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519610 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519616 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
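
The repeated feature_gate.go:328 warnings above come from the kubelet re-parsing its feature-gate configuration several times during startup; names it does not register (most of these are cluster-level OpenShift gates handled by other operators) are warned about and skipped rather than treated as fatal. The real parser lives in k8s.io/component-base/featuregate; the sketch below only mirrors the observable behavior, assuming a simple Name=bool map like the "feature gates: {map[...]}" summaries logged here. All gate names and defaults in it are illustrative.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// known stands in for the gates this component actually registers; anything
// else triggers an "unrecognized feature gate" warning, as in the log above.
var known = map[string]bool{ // name -> default value
	"DynamicResourceAllocation": false,
	"ImageVolume":               true,
	"KMSv1":                     true,
	"NodeSwap":                  false,
}

// parseGates applies "Name=true,Name=false" pairs on top of the defaults,
// warning (not failing) on names it does not know.
func parseGates(spec string) map[string]bool {
	gates := make(map[string]bool, len(known))
	for k, v := range known {
		gates[k] = v
	}
	for _, pair := range strings.Split(spec, ",") {
		name, val, ok := strings.Cut(strings.TrimSpace(pair), "=")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			fmt.Printf("W invalid value for feature gate %s: %q\n", name, val)
			continue
		}
		if _, found := gates[name]; !found {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		gates[name] = b
	}
	return gates
}

func main() {
	// GatewayAPI is unknown to this component, so it warns and is dropped,
	// while the recognized overrides land in the final map.
	fmt.Printf("I feature gates: %v\n", parseGates("KMSv1=true,GatewayAPI=true,NodeSwap=false"))
}
```
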
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519623 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519627 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519632 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519638 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519642 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519648 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519653 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519658 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519663 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519668 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519673 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519678 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519683 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519688 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519692 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519697 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519703 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519708 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519714 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519719 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519754 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519761 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519766 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519772 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519777 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 
17:40:34.519783 5113 feature_gate.go:328] unrecognized feature gate: Example Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519788 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519793 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519798 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519803 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519808 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519816 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519820 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519825 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519830 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519835 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519841 5113 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519845 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519850 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519857 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
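
Two of the entries above are a different class of warning: feature_gate.go:349 fires when a deprecated gate (KMSv1) is still being set explicitly, and feature_gate.go:351 when a gate that has already gone GA (ServiceAccountTokenNodeBinding) is set, since both overrides will stop working once the gate is removed. A minimal sketch of that maturity check, under the assumption that each registered gate carries a pre-release stage; the Stage type and the stage assignments here are illustrative, not the upstream data model.

```go
package main

import "fmt"

// Stage models a feature gate's maturity; GA and deprecated gates still
// accept overrides but warn that the override will eventually be removed.
type Stage int

const (
	Alpha Stage = iota
	Beta
	GA
	Deprecated
)

// stages is a hypothetical registry fragment matching the gates in this log.
var stages = map[string]Stage{
	"ServiceAccountTokenNodeBinding": GA,
	"KMSv1":                          Deprecated,
	"NodeSwap":                       Beta,
}

// setGate mirrors the warning behavior seen in the log: overriding a GA or
// deprecated gate succeeds but is flagged as a future removal.
func setGate(name string, value bool) {
	switch stages[name] {
	case GA:
		fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, value)
	case Deprecated:
		fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, value)
	}
	// ...store the value as usual...
}

func main() {
	setGate("ServiceAccountTokenNodeBinding", true)
	setGate("KMSv1", true)
}
```
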
Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519863 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519869 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519874 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519880 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519885 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519891 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519896 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519902 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519909 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519915 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519921 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519926 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519932 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519937 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519942 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519947 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519952 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519958 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519963 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519969 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519974 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 17:40:34 crc kubenswrapper[5113]: W1208 17:40:34.519979 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.519987 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.520475 5113 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.523872 5113 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.527053 5113 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.527195 5113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.527907 5113 server.go:1019] "Starting client certificate rotation" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.528091 5113 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.528176 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.534147 5113 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.535804 5113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.536563 5113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.543829 5113 log.go:25] "Validated CRI v1 runtime API" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.569133 5113 log.go:25] "Validated CRI v1 image API" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.570932 5113 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.575016 5113 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-17-34-44-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.575085 5113 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 17:40:34 crc 
kubenswrapper[5113]: I1208 17:40:34.591798 5113 manager.go:217] Machine: {Timestamp:2025-12-08 17:40:34.590148406 +0000 UTC m=+0.305941532 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649922048 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:763bf7f3-a73d-446d-8674-09d6015bdd0a BootID:80c2bcad-2593-4a10-ab9b-2aa8b813a421 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824958976 Type:vfs Inodes:4107656 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:4107656 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:23:6e:b6 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:23:6e:b6 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8f:d2:d8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:7f:85:f9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a2:70:2f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:94:ba:17 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:22:a5:53:5b:35:1a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1e:94:79:f8:57:1b Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649922048 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 
Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.592154 5113 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
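
The earlier bootstrap.go:266 error reports that the client certificate embedded in /var/lib/kubelet/kubeconfig expired on 2025-12-03, which is why the kubelet falls back to the bootstrap credentials and immediately starts rotating against kubelet-client-current.pem. Checking a PEM-encoded certificate's validity window needs only the Go standard library; a minimal sketch (the file path is taken from the log, everything else is generic and not part of the kubelet's own code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiry returns the NotAfter timestamp of the first certificate in a
// PEM file, e.g. /var/lib/kubelet/pki/kubelet-client-current.pem.
func certExpiry(path string) (time.Time, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return time.Time{}, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return time.Time{}, fmt.Errorf("%s: no CERTIFICATE block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return time.Time{}, err
	}
	return cert.NotAfter, nil
}

func main() {
	notAfter, err := certExpiry("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().After(notAfter) {
		fmt.Printf("client certificate expired: %s\n", notAfter.UTC())
	} else {
		fmt.Printf("client certificate valid until: %s\n", notAfter.UTC())
	}
}
```
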
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.592381 5113 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.593580 5113 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.593656 5113 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.593920 5113 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.593934 5113 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.593960 5113 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.594183 5113 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.594706 5113 state_mem.go:36] "Initialized new in-memory state store" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.594895 5113 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.595480 5113 kubelet.go:491] "Attempting to sync node with API server" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.595511 5113 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.595532 5113 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.595548 
5113 kubelet.go:397] "Adding apiserver pod source" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.595568 5113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.597369 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.597438 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.597500 5113 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.597514 5113 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.598555 5113 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.598573 5113 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.600349 5113 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.600694 5113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601259 5113 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601696 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601721 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601730 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601739 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601746 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601754 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601763 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601770 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601782 5113 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601798 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601814 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.601924 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.602113 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.602129 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.602596 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.613341 5113 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.613440 5113 server.go:1295] "Started kubelet" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.613632 5113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.613704 5113 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.613796 5113 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.614448 5113 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 17:40:34 crc systemd[1]: Started Kubernetes Kubelet. 
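
With the container manager configured as above, the node's allocatable memory follows the standard kubelet formula: capacity minus kube-reserved minus system-reserved minus the hard eviction threshold, where kube-reserved is null in this NodeConfig. Using the values from this log (MemoryCapacity of 33649922048 bytes, 350Mi of system-reserved memory, and a 100Mi memory.available hard threshold), a quick check of the arithmetic:

```go
package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	// Values reported earlier in this log.
	capacity := int64(33649922048)    // MemoryCapacity from the cAdvisor machine info
	systemReserved := int64(350 * Mi) // "SystemReserved":{"memory":"350Mi"}
	evictionHard := int64(100 * Mi)   // memory.available hard eviction threshold

	// Standard kubelet node-allocatable formula (kube-reserved is null here):
	// allocatable = capacity - kubeReserved - systemReserved - evictionHard
	allocatable := capacity - systemReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/(1024*1024*1024))
}
```
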
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.623155 5113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.623686 5113 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.623064 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f4e446518bd25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.613386533 +0000 UTC m=+0.329179649,LastTimestamp:2025-12-08 17:40:34.613386533 +0000 UTC m=+0.329179649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.626987 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.628765 5113 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.628811 5113 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.628766 5113 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.623167 5113 server.go:317] "Adding debug handlers to kubelet server" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.629586 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.630145 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.636511 5113 factory.go:55] Registering systemd factory Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.636554 5113 factory.go:223] Registration of the systemd container factory successfully Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.636838 5113 factory.go:153] Registering CRI-O factory Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.636849 5113 factory.go:223] Registration of the crio container factory successfully Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.636920 5113 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.636941 5113 factory.go:103] Registering Raw factory Dec 08 17:40:34 crc 
kubenswrapper[5113]: I1208 17:40:34.636957 5113 manager.go:1196] Started watching for new ooms in manager Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.637459 5113 manager.go:319] Starting recovery of all containers Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.664792 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665257 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665274 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665286 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665301 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665312 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665326 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665338 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665351 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665365 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665400 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665418 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665432 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665445 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665459 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665471 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665491 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665503 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665516 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665526 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665555 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665566 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665578 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665594 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665606 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665616 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665642 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665651 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665664 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665673 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665682 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665690 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665699 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665707 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665716 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665724 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665733 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665742 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665750 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665760 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665768 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665780 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665790 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665800 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" 
volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665809 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665818 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665828 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.665839 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666405 5113 manager.go:324] Recovery completed Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666596 5113 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666623 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666640 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666654 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666665 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666677 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 08 
17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666691 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666704 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666717 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666734 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666745 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666756 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666783 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666793 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666803 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666813 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666823 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc 
kubenswrapper[5113]: I1208 17:40:34.666833 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666845 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666861 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666873 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666883 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666894 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666905 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666916 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666928 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666938 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666948 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 17:40:34 crc 
kubenswrapper[5113]: I1208 17:40:34.666958 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666969 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666980 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.666990 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667000 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667011 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667021 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667050 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667063 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667076 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667086 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667132 5113 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667144 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667153 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667171 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667182 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667194 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667205 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667215 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667223 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667231 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667239 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667248 5113 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667257 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667266 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667275 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667285 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667296 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667305 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667315 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667325 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667334 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667344 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667354 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667365 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667374 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667384 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667407 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667509 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667521 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667532 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667567 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667577 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667587 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667597 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" 
volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667607 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667617 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667626 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667636 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667647 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667656 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667665 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667673 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667683 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667695 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667703 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667712 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667720 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667729 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667738 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667804 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667814 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667823 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667832 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667841 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667850 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667858 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667867 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667875 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667887 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667894 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667903 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667911 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667919 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667928 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667936 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667973 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667983 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" 
volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.667995 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668005 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668013 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668022 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668030 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668066 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668079 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668091 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668103 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668113 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668123 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668132 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668141 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668153 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668164 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668181 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668193 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668312 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668328 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668339 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668350 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668359 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668369 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668381 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668395 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668407 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668420 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668460 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668479 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668493 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668505 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668519 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668531 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668544 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668558 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668570 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668583 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668596 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668608 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668620 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668635 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668646 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668657 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668668 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668679 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668692 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668702 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668714 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668725 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668738 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668749 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668759 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668771 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668783 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668796 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668809 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668823 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668836 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668847 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668859 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668873 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668888 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668900 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668912 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668930 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668943 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668956 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668970 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668984 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.668998 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669009 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669084 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669101 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669115 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669126 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669138 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669150 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669163 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669174 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669186 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669197 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669209 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669220 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669234 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669246 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669258 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669269 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669280 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" 
volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669291 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669301 5113 reconstruct.go:97] "Volume reconstruction finished" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.669309 5113 reconciler.go:26] "Reconciler: start to sync state" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.677012 5113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.678682 5113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.678727 5113 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.678761 5113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.678775 5113 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.678837 5113 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.681616 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.687632 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.689698 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.689748 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.689761 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.690787 5113 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.690806 5113 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.690843 5113 state_mem.go:36] "Initialized new in-memory state store" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.694941 5113 policy_none.go:49] "None policy: Start" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.694969 5113 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.694984 5113 state_mem.go:35] "Initializing new in-memory state store" Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.727769 5113 
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.727769 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.736993 5113 manager.go:341] "Starting Device Plugin manager"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.737065 5113 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.737081 5113 server.go:85] "Starting device plugin registration server"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.737616 5113 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.737637 5113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.737915 5113 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.738068 5113 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.738084 5113 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.741974 5113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.742045 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.779520 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.779723 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.780737 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.780774 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.780785 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.781502 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.781773 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.781832 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782218 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782245 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782258 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782635 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782666 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782680 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.782917 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783052 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783085 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783564 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783595 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783606 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783819 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783842 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.783858 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.784436 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.784702 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.784813 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.784840 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.784843 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.784996 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.785550 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.785615 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.785653 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.785665 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.785799 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.785841 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786106 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786139 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786151 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786455 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786506 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786858 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.786887 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.787393 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.787414 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.787425 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.812250 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f4e446518bd25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.613386533 +0000 UTC m=+0.329179649,LastTimestamp:2025-12-08 17:40:34.613386533 +0000 UTC m=+0.329179649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.813995 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.831334 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.832379 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.843394 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.844410 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.844451 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.844465 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.844496 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.845067 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
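Note: from here on, every API call fails with dial tcp 38.102.83.194:6443: connect: connection refused. That is the expected bootstrap order on this single-node cluster: the kubelet comes up first and starts the kube-apiserver itself as a static pod (the "SyncLoop ADD" source="file" entry above), so event posts, lease creation, and node registration all queue up and retry until that pod is serving. A quick connectivity probe of the same endpoint, as a sketch under the assumption you are on a host that resolves api-int.crc.testing:

```go
// Probe the apiserver endpoint the kubelet is retrying against.
// Sketch only: host and port are taken from the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
	if err != nil {
		// Matches the "connection refused" errors in the log while
		// the kube-apiserver static pod is still starting.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```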
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.850691 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.859772 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: E1208 17:40:34.864522 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.872907 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.872968 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873018 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873058 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873082 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873100 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873118 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873135 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873153 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873507 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873586 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873626 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873665 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873684 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873699 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873714 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873729 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873731 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873745 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873794 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873843 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873856 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873901 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873933 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.873953 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.874000 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.874016 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.874133 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.874291 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975242 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975317 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975343 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975361 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975375 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975391 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975407 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975424 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975431 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975512 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975551 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975581 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975570 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975634 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975617 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975664 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975672 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975615 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975790 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975855 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975883 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975879 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975925 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975953 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975979 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975998 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.976011 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975926 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.975956 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.976063 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.976025 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:34 crc kubenswrapper[5113]: I1208 17:40:34.976086 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.046260 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.047868 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.047912 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.047941 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.047971 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: E1208 17:40:35.048640 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.114917 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.133656 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: W1208 17:40:35.140286 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-eceae2675cf0621bddf0ec95d1a42ef1bb17f0943640468da34e0735965112a8 WatchSource:0}: Error finding container eceae2675cf0621bddf0ec95d1a42ef1bb17f0943640468da34e0735965112a8: Status 404 returned error can't find the container with id eceae2675cf0621bddf0ec95d1a42ef1bb17f0943640468da34e0735965112a8
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.146071 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.151771 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: W1208 17:40:35.154542 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-0943dffa3d1c9d9f698667c72deb291df7c88038e29c19ccbd5b7d64437f7b9e WatchSource:0}: Error finding container 0943dffa3d1c9d9f698667c72deb291df7c88038e29c19ccbd5b7d64437f7b9e: Status 404 returned error can't find the container with id 0943dffa3d1c9d9f698667c72deb291df7c88038e29c19ccbd5b7d64437f7b9e
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.160622 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.165469 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: W1208 17:40:35.173600 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-d545abfca7f5f4b7c6cf4448f7efa7cfeaeb51f2a5b23c5ecb3d71edfcd2a69b WatchSource:0}: Error finding container d545abfca7f5f4b7c6cf4448f7efa7cfeaeb51f2a5b23c5ecb3d71edfcd2a69b: Status 404 returned error can't find the container with id d545abfca7f5f4b7c6cf4448f7efa7cfeaeb51f2a5b23c5ecb3d71edfcd2a69b
Dec 08 17:40:35 crc kubenswrapper[5113]: W1208 17:40:35.179377 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-ddac3e9d5c8dd6cef1d88653ff93a393a90456ed550625fb89cbda23a2efd2da WatchSource:0}: Error finding container ddac3e9d5c8dd6cef1d88653ff93a393a90456ed550625fb89cbda23a2efd2da: Status 404 returned error can't find the container with id ddac3e9d5c8dd6cef1d88653ff93a393a90456ed550625fb89cbda23a2efd2da
Dec 08 17:40:35 crc kubenswrapper[5113]: W1208 17:40:35.184151 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-4050e30e414fd9cf9dfef8529bfcc18c01f33de1cd055e5a7999dba0dfe27791 WatchSource:0}: Error finding container 4050e30e414fd9cf9dfef8529bfcc18c01f33de1cd055e5a7999dba0dfe27791: Status 404 returned error can't find the container with id 4050e30e414fd9cf9dfef8529bfcc18c01f33de1cd055e5a7999dba0dfe27791
Dec 08 17:40:35 crc kubenswrapper[5113]: E1208 17:40:35.232642 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.449785 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.453474 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.453551 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.453570 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.453624 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: E1208 17:40:35.454305 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.604129 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Dec 08 17:40:35 crc kubenswrapper[5113]: E1208 17:40:35.608844 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.686112 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4050e30e414fd9cf9dfef8529bfcc18c01f33de1cd055e5a7999dba0dfe27791"}
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.687512 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ddac3e9d5c8dd6cef1d88653ff93a393a90456ed550625fb89cbda23a2efd2da"}
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.688388 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d545abfca7f5f4b7c6cf4448f7efa7cfeaeb51f2a5b23c5ecb3d71edfcd2a69b"}
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.689837 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0943dffa3d1c9d9f698667c72deb291df7c88038e29c19ccbd5b7d64437f7b9e"}
Dec 08 17:40:35 crc kubenswrapper[5113]: I1208 17:40:35.690972 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"eceae2675cf0621bddf0ec95d1a42ef1bb17f0943640468da34e0735965112a8"}
Dec 08 17:40:35 crc kubenswrapper[5113]: E1208 17:40:35.696200 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:40:35 crc kubenswrapper[5113]: E1208 17:40:35.828154 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.035223 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s"
Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.172241 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.256607 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.256675 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.256689 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.256723 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.257446 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.603796 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.695336 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.697589 5113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.699241 5113 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542" exitCode=0 Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.699415 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.699539 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.700009 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.700062 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.700075 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.701677 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.713075 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"01f1601b1902e4cd2d97dad1feb305cf7fabfb3963c75af1790b8a70b3f36673"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.713148 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9a8565c2d48a5158ee81ea32bf94e1fa5918bd8ef77a2f4f13837dfdac8e5bc5"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.713166 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"df4ea2631d89aee7ea27154e97131c2deb0604c986234fa00b81adc1f68380f0"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.713180 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.715171 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.715936 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.715993 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.716007 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.716372 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.717336 5113 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9" exitCode=0 Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.717430 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.717455 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.718096 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.718141 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.718159 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.718370 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 
17:40:36.718747 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2" exitCode=0 Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.718808 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.718898 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.720799 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.720833 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.720844 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.721012 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.722330 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651" exitCode=0 Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.722358 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651"} Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.722488 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.722998 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.723015 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.723024 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:36 crc kubenswrapper[5113]: E1208 17:40:36.723236 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.725155 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.725754 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.725778 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:36 crc kubenswrapper[5113]: I1208 17:40:36.725790 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:37 crc 
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.604089 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.605482 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.636543 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.736465 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ff208ec35eb72507f5ed9a811469dc40ecc4ab248f6b69e4b2875dbc4c2001b1"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.736535 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"99c545c56fb91bdf227d9980b73132ba07deeb048b298061085b5ebee0385451"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.736552 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"6553617f01079f332992f16cd1a257b9e090879e7f3081be7900e1e7d2ed55a8"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.736601 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.738742 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.738776 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.738787 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.739029 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.746880 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.746950 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.746966 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.757405 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451" exitCode=0
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.757528 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.757717 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.760943 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5743acdcc3be9e6004ceda4b55d50dd3f70a0f644add23d30c9f195736b2f15c"}
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.761095 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.761179 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.761845 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.761881 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.761893 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.762160 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.762779 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.762796 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.762806 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.762943 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
err="node \"crc\" not found" node="crc" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.763195 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.763208 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.763218 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.763473 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.858185 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.859434 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.859483 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.859495 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:37 crc kubenswrapper[5113]: I1208 17:40:37.859536 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:40:37 crc kubenswrapper[5113]: E1208 17:40:37.860026 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.684647 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.765751 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c8a5bc6d4e596518d9c0550a369d36666642b52fd92ec859ccb3886a8c5c9f92"} Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.766309 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad"} Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.766537 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.768515 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.768568 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.768580 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:38 crc kubenswrapper[5113]: E1208 17:40:38.768833 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" 
not found" node="crc" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.771605 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8" exitCode=0 Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.771777 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.772102 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.772229 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8"} Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.772325 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.772350 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.772571 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.774428 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.774451 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.774461 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:38 crc kubenswrapper[5113]: E1208 17:40:38.774726 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775050 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775070 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775079 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:38 crc kubenswrapper[5113]: E1208 17:40:38.775265 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775579 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775600 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775611 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:38 crc kubenswrapper[5113]: E1208 17:40:38.775759 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:38 crc 
Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775951 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:38 crc kubenswrapper[5113]: I1208 17:40:38.775962 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:38 crc kubenswrapper[5113]: E1208 17:40:38.776102 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.780783 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.781359 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.780973 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"32d2671acebdb7c9bf493978733f31bdc688b2c39538d6accbde1f8acb545ef4"}
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.782338 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2e1ec5e3621120e1d45d214b07ea9461d74b8876f2ecb753c9cb64edceb6e9dd"}
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.782390 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"897669035e32774ca5030c245e526d1f4a891d11bf807b707598ca43dba686f8"}
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.782991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.783051 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.783068 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:39 crc kubenswrapper[5113]: E1208 17:40:39.783469 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:39 crc kubenswrapper[5113]: I1208 17:40:39.895385 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.094194 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.787600 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.788000 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.788153 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d08d6cf52478608ef265a49b4a56ce194ac8e56196751c94c2a0d8811c6fd23a"}
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.788188 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a976ae78e4000d634904dacd9850a4ef4b1a8f8466096b6d6a1a81bb1509d028"}
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.788529 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.788562 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.788572 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:40 crc kubenswrapper[5113]: E1208 17:40:40.788932 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.789550 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.789574 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:40 crc kubenswrapper[5113]: I1208 17:40:40.789585 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:40 crc kubenswrapper[5113]: E1208 17:40:40.789745 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.019168 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.060968 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.062770 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.062850 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.062870 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.062912 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.684777 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.684929 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
context deadline exceeded" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.790290 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.790346 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.790993 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.791058 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.791072 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:41 crc kubenswrapper[5113]: E1208 17:40:41.791483 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.791932 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.791977 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:41 crc kubenswrapper[5113]: I1208 17:40:41.791993 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:41 crc kubenswrapper[5113]: E1208 17:40:41.792411 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.174730 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.196314 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.196621 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.197627 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.197672 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.197686 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:42 crc kubenswrapper[5113]: E1208 17:40:42.198085 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.792883 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.793652 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.793688 5113 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:42 crc kubenswrapper[5113]: I1208 17:40:42.793701 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:42 crc kubenswrapper[5113]: E1208 17:40:42.794084 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.448317 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.448828 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.450171 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.450215 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.450229 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:43 crc kubenswrapper[5113]: E1208 17:40:43.450733 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.540774 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.541179 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.542464 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.542533 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.542554 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:43 crc kubenswrapper[5113]: E1208 17:40:43.543201 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.547777 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.795094 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.795785 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.795832 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:43 crc kubenswrapper[5113]: I1208 17:40:43.795843 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:43 crc kubenswrapper[5113]: E1208 17:40:43.796248 5113 kubelet.go:3336] "No need to 
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.612423 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.691657 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.691937 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.693105 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.693345 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.693525 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:44 crc kubenswrapper[5113]: E1208 17:40:44.694367 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:44 crc kubenswrapper[5113]: E1208 17:40:44.742293 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.798177 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.798847 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.798880 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:44 crc kubenswrapper[5113]: I1208 17:40:44.798892 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:44 crc kubenswrapper[5113]: E1208 17:40:44.799278 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:48 crc kubenswrapper[5113]: I1208 17:40:48.510276 5113 trace.go:236] Trace[2004513440]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:38.508) (total time: 10001ms):
Dec 08 17:40:48 crc kubenswrapper[5113]: Trace[2004513440]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:40:48.510)
Dec 08 17:40:48 crc kubenswrapper[5113]: Trace[2004513440]: [10.001360811s] [10.001360811s] END
Dec 08 17:40:48 crc kubenswrapper[5113]: E1208 17:40:48.510326 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:40:48 crc kubenswrapper[5113]: I1208 17:40:48.604289 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 08 17:40:49 crc kubenswrapper[5113]: I1208 17:40:49.176000 5113 trace.go:236] Trace[395325293]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:39.173) (total time: 10002ms):
Dec 08 17:40:49 crc kubenswrapper[5113]: Trace[395325293]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:40:49.175)
Dec 08 17:40:49 crc kubenswrapper[5113]: Trace[395325293]: [10.002002856s] [10.002002856s] END
Dec 08 17:40:49 crc kubenswrapper[5113]: E1208 17:40:49.176070 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:40:49 crc kubenswrapper[5113]: I1208 17:40:49.720330 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 17:40:49 crc kubenswrapper[5113]: I1208 17:40:49.720426 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 17:40:49 crc kubenswrapper[5113]: I1208 17:40:49.746205 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 17:40:49 crc kubenswrapper[5113]: I1208 17:40:49.763963 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.383109 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.383766 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.385074 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.385111 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.385121 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:50 crc kubenswrapper[5113]: E1208 17:40:50.385531 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
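Note: the error has shifted from "connection refused" to "net/http: TLS handshake timeout", meaning the apiserver now accepts TCP connections but cannot finish TLS in time while it starts up. That message comes from Go's HTTP transport; the ~10001ms totals in the traces are consistent with a 10-second handshake limit, which appears to be client-go's default, though treat that figure as an assumption. A self-contained sketch that reproduces the same failure mode:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // TLSHandshakeTimeout bounds only the handshake, not the whole
        // request; if the server accepts TCP but stalls on TLS, the request
        // fails with "net/http: TLS handshake timeout".
        client := &http.Client{
            Transport: &http.Transport{TLSHandshakeTimeout: 10 * time.Second},
        }
        resp, err := client.Get("https://api-int.crc.testing:6443/readyz")
        if err != nil {
            fmt.Println(err) // e.g. net/http: TLS handshake timeout
            return
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }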
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.413236 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.819725 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.820427 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.820487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.820500 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:50 crc kubenswrapper[5113]: E1208 17:40:50.820988 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:50 crc kubenswrapper[5113]: I1208 17:40:50.834004 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 08 17:40:50 crc kubenswrapper[5113]: E1208 17:40:50.837152 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 08 17:40:51 crc kubenswrapper[5113]: I1208 17:40:51.686128 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 17:40:51 crc kubenswrapper[5113]: I1208 17:40:51.686262 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:40:51 crc kubenswrapper[5113]: I1208 17:40:51.822610 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:51 crc kubenswrapper[5113]: I1208 17:40:51.823436 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:51 crc kubenswrapper[5113]: I1208 17:40:51.823492 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:51 crc kubenswrapper[5113]: I1208 17:40:51.823508 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:51 crc kubenswrapper[5113]: E1208 17:40:51.824024 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.181364 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.181757 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.182913 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.182958 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.182974 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:52 crc kubenswrapper[5113]: E1208 17:40:52.183327 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.189005 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.826075 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.826742 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.826798 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:40:52 crc kubenswrapper[5113]: I1208 17:40:52.826820 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:40:52 crc kubenswrapper[5113]: E1208 17:40:52.827468 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:40:52 crc kubenswrapper[5113]: E1208 17:40:52.926921 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:40:52 crc kubenswrapper[5113]: E1208 17:40:52.972383 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.725297 5113 trace.go:236] Trace[1164014020]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:42.247) (total time: 12477ms):
Dec 08 17:40:54 crc kubenswrapper[5113]: Trace[1164014020]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12477ms (17:40:54.725)
Dec 08 17:40:54 crc kubenswrapper[5113]: Trace[1164014020]: [12.477491872s] [12.477491872s] END
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.725353 5113 trace.go:236] Trace[187106460]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 17:40:43.718) (total time: 11006ms):
Dec 08 17:40:54 crc kubenswrapper[5113]: Trace[187106460]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11006ms (17:40:54.725)
Dec 08 17:40:54 crc kubenswrapper[5113]: Trace[187106460]: [11.006738825s] [11.006738825s] END
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.725376 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.725401 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.725356 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.725501 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e446518bd25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.613386533 +0000 UTC m=+0.329179649,LastTimestamp:2025-12-08 17:40:34.613386533 +0000 UTC m=+0.329179649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.730076 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.730184 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.733913 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.739193 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.742514 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.745735 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e446ca23d38 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.739838264 +0000 UTC m=+0.455631380,LastTimestamp:2025-12-08 17:40:34.739838264 +0000 UTC m=+0.455631380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.754780 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.780756262 +0000 UTC m=+0.496549378,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.754980 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.760138 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.780781172 +0000 UTC m=+0.496574288,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.771845 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a6327d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.780791553 +0000 UTC m=+0.496584669,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.781382 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.782236351 +0000 UTC m=+0.498029467,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.786472 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50592->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.786598 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50592->192.168.126.11:17697: read: connection reset by peer"
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.787212 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.787269 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.790059 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.782251831 +0000 UTC m=+0.498044947,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.798008 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a6327d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.782262852 +0000 UTC m=+0.498055968,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.802434 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc
status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.782649862 +0000 UTC m=+0.498442988,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.806450 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.782672783 +0000 UTC m=+0.498465899,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.810529 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a6327d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.782686333 +0000 UTC m=+0.498479449,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.814929 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.783582477 +0000 UTC m=+0.499375593,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.819180 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.783602057 +0000 UTC m=+0.499395173,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.824223 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a6327d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.783612768 +0000 UTC m=+0.499405884,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.828820 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.783834313 +0000 UTC m=+0.499627429,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.833297 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.783851884 +0000 UTC m=+0.499645000,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.837765 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a6327d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.783863594 +0000 UTC m=+0.499656710,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.841907 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.841956 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.78482893 +0000 UTC m=+0.500622046,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.842170 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.845981 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.846017 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:54 crc kubenswrapper[5113]: I1208 17:40:54.846028 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.846391 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.846585 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.784930753 +0000 UTC m=+0.500723869,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.853053 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a6327d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a6327d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689766013 +0000 UTC m=+0.405559129,LastTimestamp:2025-12-08 17:40:34.785011345 +0000 UTC m=+0.500804461,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.858827 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a5ade9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a5ade9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689732073 +0000 UTC m=+0.405525189,LastTimestamp:2025-12-08 17:40:34.785634891 +0000 UTC m=+0.501428007,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.864812 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f4e4469a60767\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f4e4469a60767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:34.689754983 +0000 UTC m=+0.405548099,LastTimestamp:2025-12-08 17:40:34.785660292 +0000 UTC m=+0.501453408,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.871227 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e4484df237d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.146482557 +0000 UTC m=+0.862275673,LastTimestamp:2025-12-08 17:40:35.146482557 +0000 UTC m=+0.862275673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.878832 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e44860c33f3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.166213107 +0000 UTC m=+0.882006213,LastTimestamp:2025-12-08 17:40:35.166213107 +0000 UTC m=+0.882006213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.884169 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4486b7fcda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.177471194 +0000 UTC m=+0.893264300,LastTimestamp:2025-12-08 17:40:35.177471194 +0000 UTC m=+0.893264300,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.890922 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44870ca10d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.183018253 +0000 UTC m=+0.898811369,LastTimestamp:2025-12-08 17:40:35.183018253 +0000 UTC m=+0.898811369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.896764 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44875b2f9f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.188166559 +0000 UTC m=+0.903959665,LastTimestamp:2025-12-08 17:40:35.188166559 +0000 UTC m=+0.903959665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.901567 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44a6961f63 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.712122723 +0000 UTC m=+1.427915839,LastTimestamp:2025-12-08 17:40:35.712122723 +0000 UTC m=+1.427915839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.908846 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44a6966a27 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.712141863 +0000 UTC m=+1.427934979,LastTimestamp:2025-12-08 17:40:35.712141863 +0000 UTC m=+1.427934979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.913669 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e44a697d367 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.712234343 +0000 UTC m=+1.428027459,LastTimestamp:2025-12-08 17:40:35.712234343 +0000 UTC m=+1.428027459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.919200 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e44a6a5f35c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.713160028 +0000 UTC m=+1.428953144,LastTimestamp:2025-12-08 17:40:35.713160028 +0000 UTC m=+1.428953144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.923191 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44a6df4287 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.716915847 +0000 UTC m=+1.432708963,LastTimestamp:2025-12-08 17:40:35.716915847 +0000 UTC m=+1.432708963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.928291 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44a72b3782 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.721893762 +0000 UTC m=+1.437686878,LastTimestamp:2025-12-08 17:40:35.721893762 +0000 UTC m=+1.437686878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.936122 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44a74311c0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.72345696 +0000 UTC m=+1.439250076,LastTimestamp:2025-12-08 17:40:35.72345696 +0000 UTC m=+1.439250076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.940089 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44a7dbce07 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.733466631 +0000 UTC m=+1.449259737,LastTimestamp:2025-12-08 17:40:35.733466631 +0000 UTC m=+1.449259737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.945056 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44a8214cc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started 
container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.738021064 +0000 UTC m=+1.453814180,LastTimestamp:2025-12-08 17:40:35.738021064 +0000 UTC m=+1.453814180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.950211 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e44a827714a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.738423626 +0000 UTC m=+1.454216752,LastTimestamp:2025-12-08 17:40:35.738423626 +0000 UTC m=+1.454216752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.955238 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e44a8279e72 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:35.738435186 +0000 UTC m=+1.454228302,LastTimestamp:2025-12-08 17:40:35.738435186 +0000 UTC m=+1.454228302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.960214 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44b9dd3209 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.035547657 +0000 UTC m=+1.751340773,LastTimestamp:2025-12-08 17:40:36.035547657 +0000 UTC m=+1.751340773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.965892 5113 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44bc0d76e4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.072265444 +0000 UTC m=+1.788058560,LastTimestamp:2025-12-08 17:40:36.072265444 +0000 UTC m=+1.788058560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.970548 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44bc24ad8a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.073786762 +0000 UTC m=+1.789579878,LastTimestamp:2025-12-08 17:40:36.073786762 +0000 UTC m=+1.789579878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.975252 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44d1d6f657 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.437792343 +0000 UTC m=+2.153585469,LastTimestamp:2025-12-08 17:40:36.437792343 +0000 UTC m=+2.153585469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.988427 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44d2c9feac openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.453719724 +0000 UTC m=+2.169512840,LastTimestamp:2025-12-08 17:40:36.453719724 +0000 UTC m=+2.169512840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.992941 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44d2ddd72a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.45502033 +0000 UTC m=+2.170813446,LastTimestamp:2025-12-08 17:40:36.45502033 +0000 UTC m=+2.170813446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:54 crc kubenswrapper[5113]: E1208 17:40:54.996549 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44dfc39b8d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.671404941 +0000 UTC m=+2.387198057,LastTimestamp:2025-12-08 17:40:36.671404941 +0000 UTC m=+2.387198057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.000516 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e44e1311c4f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.695358543 +0000 UTC m=+2.411151659,LastTimestamp:2025-12-08 17:40:36.695358543 +0000 UTC m=+2.411151659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.005836 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e44e201c1b4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.709032372 +0000 UTC m=+2.424825488,LastTimestamp:2025-12-08 17:40:36.709032372 +0000 UTC m=+2.424825488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.010498 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44e2d37a42 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.722776642 +0000 UTC m=+2.438569758,LastTimestamp:2025-12-08 17:40:36.722776642 +0000 UTC m=+2.438569758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.015562 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44e2f4ae1b openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.724952603 +0000 UTC m=+2.440745719,LastTimestamp:2025-12-08 17:40:36.724952603 +0000 UTC m=+2.440745719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.019757 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e44e2f4c225 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:36.724957733 +0000 UTC m=+2.440750849,LastTimestamp:2025-12-08 17:40:36.724957733 +0000 UTC m=+2.440750849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.023755 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e44f6cc7c23 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.057862691 +0000 UTC m=+2.773655807,LastTimestamp:2025-12-08 17:40:37.057862691 +0000 UTC m=+2.773655807,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.030059 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44f6cda2d5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.057938133 +0000 UTC m=+2.773731239,LastTimestamp:2025-12-08 17:40:37.057938133 +0000 UTC m=+2.773731239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.034728 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e44f6ce062b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.057963563 +0000 UTC m=+2.773756679,LastTimestamp:2025-12-08 17:40:37.057963563 +0000 UTC m=+2.773756679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.039224 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44f713f339 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.062546233 +0000 UTC m=+2.778339349,LastTimestamp:2025-12-08 17:40:37.062546233 +0000 UTC m=+2.778339349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.041530 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44f78521dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.069963741 +0000 UTC m=+2.785756857,LastTimestamp:2025-12-08 17:40:37.069963741 +0000 UTC m=+2.785756857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.043721 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f4e44f790c7bc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.0707271 +0000 UTC m=+2.786520216,LastTimestamp:2025-12-08 17:40:37.0707271 +0000 UTC m=+2.786520216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.046080 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e44f79533fd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.071016957 +0000 UTC m=+2.786810073,LastTimestamp:2025-12-08 17:40:37.071016957 +0000 UTC m=+2.786810073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.048772 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e44f7e618e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.076318434 +0000 UTC m=+2.792111550,LastTimestamp:2025-12-08 17:40:37.076318434 +0000 UTC m=+2.792111550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.051182 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44f9acfbf5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.106129909 +0000 UTC m=+2.821923025,LastTimestamp:2025-12-08 17:40:37.106129909 +0000 UTC m=+2.821923025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.052916 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e44f9e3e798 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.109729176 +0000 UTC m=+2.825522292,LastTimestamp:2025-12-08 17:40:37.109729176 +0000 UTC m=+2.825522292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.055436 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e450736b1ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.33325867 +0000 UTC m=+3.049051786,LastTimestamp:2025-12-08 17:40:37.33325867 +0000 UTC m=+3.049051786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.057808 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e450736f43e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.33327571 +0000 UTC m=+3.049068826,LastTimestamp:2025-12-08 17:40:37.33327571 +0000 UTC m=+3.049068826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.062104 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e4507f938ce openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.346007246 +0000 UTC m=+3.061800362,LastTimestamp:2025-12-08 17:40:37.346007246 +0000 UTC m=+3.061800362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.066670 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e45080064fb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.346477307 +0000 UTC m=+3.062270423,LastTimestamp:2025-12-08 17:40:37.346477307 +0000 UTC m=+3.062270423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.070917 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e45080b3aea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.347187434 +0000 UTC m=+3.062980550,LastTimestamp:2025-12-08 17:40:37.347187434 +0000 UTC m=+3.062980550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.077524 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e450815e267 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.347885671 +0000 UTC m=+3.063678787,LastTimestamp:2025-12-08 17:40:37.347885671 +0000 UTC m=+3.063678787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.083078 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4516daeccc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.595679948 +0000 UTC m=+3.311473064,LastTimestamp:2025-12-08 17:40:37.595679948 +0000 UTC m=+3.311473064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.088952 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e4516db3b1e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: 
kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.595699998 +0000 UTC m=+3.311493114,LastTimestamp:2025-12-08 17:40:37.595699998 +0000 UTC m=+3.311493114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.093445 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f4e4517c238f6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.610838262 +0000 UTC m=+3.326631368,LastTimestamp:2025-12-08 17:40:37.610838262 +0000 UTC m=+3.326631368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.098440 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4517d426a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.61201322 +0000 UTC m=+3.327806336,LastTimestamp:2025-12-08 17:40:37.61201322 +0000 UTC m=+3.327806336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.103705 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4517e55a39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.613140537 +0000 UTC m=+3.328933653,LastTimestamp:2025-12-08 17:40:37.613140537 +0000 UTC 
m=+3.328933653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.121280 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4520ecd01d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.764624413 +0000 UTC m=+3.480417529,LastTimestamp:2025-12-08 17:40:37.764624413 +0000 UTC m=+3.480417529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.125871 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4526d688f7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.863827703 +0000 UTC m=+3.579620819,LastTimestamp:2025-12-08 17:40:37.863827703 +0000 UTC m=+3.579620819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.130671 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4527f51970 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.882607984 +0000 UTC m=+3.598401100,LastTimestamp:2025-12-08 17:40:37.882607984 +0000 UTC m=+3.598401100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.134837 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4528498fd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.888143317 +0000 UTC m=+3.603936433,LastTimestamp:2025-12-08 17:40:37.888143317 +0000 UTC m=+3.603936433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.139360 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e453107109c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.034780316 +0000 UTC m=+3.750573432,LastTimestamp:2025-12-08 17:40:38.034780316 +0000 UTC m=+3.750573432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.143726 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e453280b18e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.05952859 +0000 UTC m=+3.775321706,LastTimestamp:2025-12-08 17:40:38.05952859 +0000 UTC m=+3.775321706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.146978 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e453987463b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.177400379 +0000 UTC m=+3.893193495,LastTimestamp:2025-12-08 17:40:38.177400379 +0000 UTC m=+3.893193495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.149014 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e453a289d22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.187973922 +0000 UTC m=+3.903767038,LastTimestamp:2025-12-08 17:40:38.187973922 +0000 UTC m=+3.903767038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.156347 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e455d45959c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.7770751 +0000 UTC m=+4.492868216,LastTimestamp:2025-12-08 17:40:38.7770751 +0000 UTC m=+4.492868216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.161051 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4569c36a74 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.98664818 +0000 UTC m=+4.702441296,LastTimestamp:2025-12-08 17:40:38.98664818 +0000 
UTC m=+4.702441296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.166224 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e456a48bd82 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.99538573 +0000 UTC m=+4.711178836,LastTimestamp:2025-12-08 17:40:38.99538573 +0000 UTC m=+4.711178836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.171193 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e456a5964f8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.996477176 +0000 UTC m=+4.712270292,LastTimestamp:2025-12-08 17:40:38.996477176 +0000 UTC m=+4.712270292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.176532 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4577ef59be openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.22440851 +0000 UTC m=+4.940201626,LastTimestamp:2025-12-08 17:40:39.22440851 +0000 UTC m=+4.940201626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.182909 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e457894b057 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.235244119 +0000 UTC m=+4.951037235,LastTimestamp:2025-12-08 17:40:39.235244119 +0000 UTC m=+4.951037235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.188149 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e4578a95eaa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.236599466 +0000 UTC m=+4.952392582,LastTimestamp:2025-12-08 17:40:39.236599466 +0000 UTC m=+4.952392582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.193435 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e458ccf9a65 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.574649445 +0000 UTC m=+5.290442561,LastTimestamp:2025-12-08 17:40:39.574649445 +0000 UTC m=+5.290442561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.197749 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e458dd040f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.591469301 +0000 UTC m=+5.307262417,LastTimestamp:2025-12-08 17:40:39.591469301 +0000 UTC m=+5.307262417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.202764 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e458de86ffd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.593054205 +0000 UTC m=+5.308847321,LastTimestamp:2025-12-08 17:40:39.593054205 +0000 UTC m=+5.308847321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.207367 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e459e681b42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.869856578 +0000 UTC m=+5.585649684,LastTimestamp:2025-12-08 17:40:39.869856578 +0000 UTC m=+5.585649684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.212756 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e459f537ac1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.885281985 +0000 UTC m=+5.601075101,LastTimestamp:2025-12-08 17:40:39.885281985 +0000 UTC m=+5.601075101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.218179 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e459f6b4bf4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:39.886842868 +0000 UTC m=+5.602635984,LastTimestamp:2025-12-08 17:40:39.886842868 +0000 UTC m=+5.602635984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.224087 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e45cb41cb42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:40.62232045 +0000 UTC m=+6.338113556,LastTimestamp:2025-12-08 17:40:40.62232045 +0000 UTC m=+6.338113556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.231490 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f4e45cf0082ef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:40.685150959 +0000 UTC m=+6.400944075,LastTimestamp:2025-12-08 17:40:40.685150959 +0000 UTC m=+6.400944075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.244333 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 17:40:55 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-controller-manager-crc.187f4e460a9709b7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 08 17:40:55 crc kubenswrapper[5113]: body: Dec 08 17:40:55 crc kubenswrapper[5113]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:41.684871607 +0000 UTC m=+7.400664773,LastTimestamp:2025-12-08 17:40:41.684871607 +0000 UTC m=+7.400664773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:55 crc kubenswrapper[5113]: > Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.248693 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e460a992024 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:41.68500842 +0000 UTC m=+7.400801556,LastTimestamp:2025-12-08 17:40:41.68500842 +0000 UTC m=+7.400801556,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.255095 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:40:55 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e47e98b4eb2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 17:40:55 crc kubenswrapper[5113]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:40:55 crc kubenswrapper[5113]: Dec 08 17:40:55 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:49.720389298 +0000 UTC m=+15.436182434,LastTimestamp:2025-12-08 17:40:49.720389298 +0000 UTC m=+15.436182434,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:55 crc kubenswrapper[5113]: > Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.259944 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e47e98c6e48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:49.72046292 +0000 UTC m=+15.436256036,LastTimestamp:2025-12-08 17:40:49.72046292 +0000 UTC m=+15.436256036,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.266103 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e47e98b4eb2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:40:55 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e47e98b4eb2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 17:40:55 crc kubenswrapper[5113]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 17:40:55 crc kubenswrapper[5113]: Dec 08 17:40:55 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:49.720389298 +0000 UTC m=+15.436182434,LastTimestamp:2025-12-08 17:40:49.763875961 +0000 UTC m=+15.479669097,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:55 crc kubenswrapper[5113]: > Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.271200 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e47e98c6e48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e47e98c6e48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:49.72046292 +0000 UTC m=+15.436256036,LastTimestamp:2025-12-08 17:40:49.764299462 +0000 UTC m=+15.480092598,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.276797 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 17:40:55 crc kubenswrapper[5113]: 
&Event{ObjectMeta:{kube-controller-manager-crc.187f4e485eb79609 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 17:40:55 crc kubenswrapper[5113]: body: Dec 08 17:40:55 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:51.686225417 +0000 UTC m=+17.402018533,LastTimestamp:2025-12-08 17:40:51.686225417 +0000 UTC m=+17.402018533,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:55 crc kubenswrapper[5113]: > Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.284244 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f4e485eb89b22 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:51.686292258 +0000 UTC m=+17.402085374,LastTimestamp:2025-12-08 17:40:51.686292258 +0000 UTC m=+17.402085374,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.298467 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:40:55 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e491782b00f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:50592->192.168.126.11:17697: read: connection reset by peer Dec 08 17:40:55 crc kubenswrapper[5113]: body: Dec 08 17:40:55 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:54.786543631 +0000 UTC m=+20.502336747,LastTimestamp:2025-12-08 17:40:54.786543631 +0000 UTC m=+20.502336747,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 
17:40:55 crc kubenswrapper[5113]: > Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.308163 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4917842758 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50592->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:54.786639704 +0000 UTC m=+20.502432820,LastTimestamp:2025-12-08 17:40:54.786639704 +0000 UTC m=+20.502432820,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.317064 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 17:40:55 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.187f4e49178d677d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 08 17:40:55 crc kubenswrapper[5113]: body: Dec 08 17:40:55 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:54.787245949 +0000 UTC m=+20.503039065,LastTimestamp:2025-12-08 17:40:54.787245949 +0000 UTC m=+20.503039065,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 17:40:55 crc kubenswrapper[5113]: > Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.344448 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49178e0c6b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:54.787288171 +0000 UTC m=+20.503081287,LastTimestamp:2025-12-08 17:40:54.787288171 +0000 UTC m=+20.503081287,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.615288 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.836220 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.838689 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c8a5bc6d4e596518d9c0550a369d36666642b52fd92ec859ccb3886a8c5c9f92" exitCode=255 Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.838760 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c8a5bc6d4e596518d9c0550a369d36666642b52fd92ec859ccb3886a8c5c9f92"} Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.839071 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.839781 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.839824 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.839844 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.840235 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:55 crc kubenswrapper[5113]: I1208 17:40:55.840493 5113 scope.go:117] "RemoveContainer" containerID="c8a5bc6d4e596518d9c0550a369d36666642b52fd92ec859ccb3886a8c5c9f92" Dec 08 17:40:55 crc kubenswrapper[5113]: E1208 17:40:55.847646 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e4528498fd5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4528498fd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.888143317 +0000 UTC m=+3.603936433,LastTimestamp:2025-12-08 17:40:55.841901031 +0000 UTC m=+21.557694137,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:56 crc kubenswrapper[5113]: E1208 17:40:56.102366 5113 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e453987463b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e453987463b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.177400379 +0000 UTC m=+3.893193495,LastTimestamp:2025-12-08 17:40:56.095399231 +0000 UTC m=+21.811192347,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:56 crc kubenswrapper[5113]: E1208 17:40:56.208520 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e453a289d22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e453a289d22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.187973922 +0000 UTC m=+3.903767038,LastTimestamp:2025-12-08 17:40:56.2016137 +0000 UTC m=+21.917406816,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.438298 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.609696 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.845062 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.847223 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe"} Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.847483 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.848399 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.848443 
5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:56 crc kubenswrapper[5113]: I1208 17:40:56.848457 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:56 crc kubenswrapper[5113]: E1208 17:40:56.848833 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:57 crc kubenswrapper[5113]: E1208 17:40:57.263236 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.607863 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.857702 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.858377 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.860310 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe" exitCode=255 Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.860449 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe"} Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.860556 5113 scope.go:117] "RemoveContainer" containerID="c8a5bc6d4e596518d9c0550a369d36666642b52fd92ec859ccb3886a8c5c9f92" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.860605 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.861747 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.861777 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.861832 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:57 crc kubenswrapper[5113]: E1208 17:40:57.862121 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:57 crc kubenswrapper[5113]: I1208 17:40:57.862367 5113 scope.go:117] "RemoveContainer" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe" Dec 08 17:40:57 crc kubenswrapper[5113]: E1208 17:40:57.862553 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:40:57 crc kubenswrapper[5113]: E1208 17:40:57.867117 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.608329 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.691244 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.691462 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.692550 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.692684 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.692785 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:58 crc kubenswrapper[5113]: E1208 17:40:58.693368 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.695730 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.865759 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.868700 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.868900 
5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.869685 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.869757 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.869771 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.870102 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.870148 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.870162 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:40:58 crc kubenswrapper[5113]: E1208 17:40:58.870314 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:58 crc kubenswrapper[5113]: E1208 17:40:58.870618 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:40:58 crc kubenswrapper[5113]: I1208 17:40:58.871061 5113 scope.go:117] "RemoveContainer" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe" Dec 08 17:40:58 crc kubenswrapper[5113]: E1208 17:40:58.871372 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:40:58 crc kubenswrapper[5113]: E1208 17:40:58.877061 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e49ceda81ca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:40:58.87132307 +0000 UTC m=+24.587116206,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:40:59 crc kubenswrapper[5113]: I1208 17:40:59.609105 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:00 crc kubenswrapper[5113]: I1208 17:41:00.610218 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:01 crc kubenswrapper[5113]: I1208 17:41:01.130663 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:01 crc kubenswrapper[5113]: I1208 17:41:01.131750 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:01 crc kubenswrapper[5113]: I1208 17:41:01.131800 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:01 crc kubenswrapper[5113]: I1208 17:41:01.131812 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:01 crc kubenswrapper[5113]: I1208 17:41:01.131837 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:41:01 crc kubenswrapper[5113]: E1208 17:41:01.142138 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:41:01 crc kubenswrapper[5113]: I1208 17:41:01.610825 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:02 crc kubenswrapper[5113]: E1208 17:41:02.222779 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 17:41:02 crc kubenswrapper[5113]: I1208 17:41:02.609400 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:02 crc kubenswrapper[5113]: E1208 17:41:02.661963 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 17:41:03 crc kubenswrapper[5113]: I1208 17:41:03.611017 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:04 crc kubenswrapper[5113]: E1208 17:41:04.268772 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:41:04 crc kubenswrapper[5113]: E1208 17:41:04.570883 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 17:41:04 crc kubenswrapper[5113]: I1208 17:41:04.610992 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:04 crc kubenswrapper[5113]: E1208 17:41:04.742800 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 17:41:04 crc kubenswrapper[5113]: E1208 17:41:04.980027 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 17:41:05 crc kubenswrapper[5113]: I1208 17:41:05.609999 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.438978 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.439377 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.440860 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.440942 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.440967 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:06 crc kubenswrapper[5113]: E1208 17:41:06.441864 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.442422 5113 scope.go:117] "RemoveContainer" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe" Dec 08 17:41:06 crc kubenswrapper[5113]: E1208 17:41:06.442836 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:06 crc kubenswrapper[5113]: E1208 17:41:06.449320 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e49ceda81ca\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:41:06.442776736 +0000 UTC m=+32.158569892,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.608892 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.848588 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.891993 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.892689 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.892752 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.892771 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:06 crc kubenswrapper[5113]: E1208 17:41:06.893340 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:06 crc kubenswrapper[5113]: I1208 17:41:06.893669 5113 scope.go:117] "RemoveContainer" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe" Dec 08 17:41:06 crc kubenswrapper[5113]: E1208 17:41:06.893923 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:06 crc kubenswrapper[5113]: E1208 17:41:06.899636 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e49ceda81ca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:41:06.893880246 +0000 UTC m=+32.609673362,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:41:07 crc kubenswrapper[5113]: I1208 17:41:07.609421 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:08 crc kubenswrapper[5113]: I1208 17:41:08.142307 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:08 crc kubenswrapper[5113]: I1208 17:41:08.143926 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:08 crc kubenswrapper[5113]: I1208 17:41:08.144119 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:08 crc kubenswrapper[5113]: I1208 17:41:08.144225 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:08 crc kubenswrapper[5113]: I1208 17:41:08.144341 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 17:41:08 crc kubenswrapper[5113]: E1208 17:41:08.158677 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 17:41:08 crc kubenswrapper[5113]: I1208 17:41:08.610227 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:09 crc kubenswrapper[5113]: I1208 17:41:09.608871 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:10 crc kubenswrapper[5113]: I1208 17:41:10.608749 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 17:41:11 crc kubenswrapper[5113]: E1208 17:41:11.279011 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 17:41:11 crc kubenswrapper[5113]: I1208 17:41:11.612505 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
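Nearly every failure in this stretch is the same RBAC denial for system:anonymous, emitted by different components (csi_plugin, the lease controller, kubelet_node_status, reflectors). When triaging a capture like this, it helps to reduce each klog record to its severity, timestamp, and source location so the distinct failure sites stand out. A rough parser sketch; the regexp is written for the "E1208 17:41:01.142138 5113 kubelet_node_status.go:116]" header format seen here, and the sample line is taken from this log.

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches headers like "I1208 17:41:01.142138 5113 kubelet_node_status.go:116]":
// severity letter, MMDD date, wall-clock time, PID, then file:line.
var klogHeader = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+ ([\w.]+:\d+)\]`)

func main() {
	line := `E1208 17:41:01.142138 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node"`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		// severity=E date=1208 time=17:41:01.142138 source=kubelet_node_status.go:116
		fmt.Printf("severity=%s date=%s time=%s source=%s\n", m[1], m[2], m[3], m[4])
	}
}
```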
Dec 08 17:41:12 crc kubenswrapper[5113]: I1208 17:41:12.612238 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:13 crc kubenswrapper[5113]: I1208 17:41:13.609767 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:14 crc kubenswrapper[5113]: I1208 17:41:14.608380 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:14 crc kubenswrapper[5113]: E1208 17:41:14.743567 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:41:15 crc kubenswrapper[5113]: I1208 17:41:15.159667 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:15 crc kubenswrapper[5113]: I1208 17:41:15.160837 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:15 crc kubenswrapper[5113]: I1208 17:41:15.160876 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:15 crc kubenswrapper[5113]: I1208 17:41:15.160888 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:15 crc kubenswrapper[5113]: I1208 17:41:15.160914 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:41:15 crc kubenswrapper[5113]: E1208 17:41:15.177710 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:41:15 crc kubenswrapper[5113]: I1208 17:41:15.608733 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:16 crc kubenswrapper[5113]: I1208 17:41:16.609208 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:17 crc kubenswrapper[5113]: I1208 17:41:17.609306 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:18 crc kubenswrapper[5113]: E1208 17:41:18.284920 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 17:41:18 crc kubenswrapper[5113]: I1208 17:41:18.608248 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:19 crc kubenswrapper[5113]: E1208 17:41:19.589242 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.608341 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.679932 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.680973 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.681017 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.681061 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:19 crc kubenswrapper[5113]: E1208 17:41:19.681476 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.681729 5113 scope.go:117] "RemoveContainer" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe"
Dec 08 17:41:19 crc kubenswrapper[5113]: E1208 17:41:19.689492 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e4528498fd5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e4528498fd5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:37.888143317 +0000 UTC m=+3.603936433,LastTimestamp:2025-12-08 17:41:19.683283497 +0000 UTC m=+45.399076613,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:41:19 crc kubenswrapper[5113]: E1208 17:41:19.904615 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e453987463b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e453987463b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.177400379 +0000 UTC m=+3.893193495,LastTimestamp:2025-12-08 17:41:19.895727616 +0000 UTC m=+45.611520772,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:41:19 crc kubenswrapper[5113]: E1208 17:41:19.914994 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e453a289d22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e453a289d22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:38.187973922 +0000 UTC m=+3.903767038,LastTimestamp:2025-12-08 17:41:19.90954765 +0000 UTC m=+45.625340806,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.930886 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.932892 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56"}
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.933280 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.934225 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.934273 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:19 crc kubenswrapper[5113]: I1208 17:41:19.934285 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:19 crc kubenswrapper[5113]: E1208 17:41:19.934739 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:20 crc kubenswrapper[5113]: E1208 17:41:20.194577 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 17:41:20 crc kubenswrapper[5113]: I1208 17:41:20.609549 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.609696 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.940182 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.940708 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.943283 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56" exitCode=255
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.943342 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56"}
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.943386 5113 scope.go:117] "RemoveContainer" containerID="bf6bfbd315108895f422aad995096bee9387f376949237d7f2797f7193e2e6fe"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.943631 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.944578 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.944637 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.944654 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:21 crc kubenswrapper[5113]: E1208 17:41:21.945125 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:21 crc kubenswrapper[5113]: I1208 17:41:21.945469 5113 scope.go:117] "RemoveContainer" containerID="bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56"
Dec 08 17:41:21 crc kubenswrapper[5113]: E1208 17:41:21.945776 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:41:21 crc kubenswrapper[5113]: E1208 17:41:21.951984 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e49ceda81ca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:41:21.94571867 +0000 UTC m=+47.661511786,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.177894 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.179339 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.179395 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.179407 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.179440 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:41:22 crc kubenswrapper[5113]: E1208 17:41:22.192712 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:41:22 crc kubenswrapper[5113]: E1208 17:41:22.228404 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.605244 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:22 crc kubenswrapper[5113]: I1208 17:41:22.949078 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 08 17:41:23 crc kubenswrapper[5113]: I1208 17:41:23.614575 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:24 crc kubenswrapper[5113]: I1208 17:41:24.607754 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:24 crc kubenswrapper[5113]: I1208 17:41:24.697228 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 17:41:24 crc kubenswrapper[5113]: I1208 17:41:24.697445 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:24 crc kubenswrapper[5113]: I1208 17:41:24.698208 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:24 crc kubenswrapper[5113]: I1208 17:41:24.698240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:24 crc kubenswrapper[5113]: I1208 17:41:24.698253 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:24 crc kubenswrapper[5113]: E1208 17:41:24.698543 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:24 crc kubenswrapper[5113]: E1208 17:41:24.744160 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:41:25 crc kubenswrapper[5113]: E1208 17:41:25.292879 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 17:41:25 crc kubenswrapper[5113]: I1208 17:41:25.608872 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.438741 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.438998 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.440317 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.440367 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.440377 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:26 crc kubenswrapper[5113]: E1208 17:41:26.440702 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.440949 5113 scope.go:117] "RemoveContainer" containerID="bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56"
Dec 08 17:41:26 crc kubenswrapper[5113]: E1208 17:41:26.441151 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:41:26 crc kubenswrapper[5113]: E1208 17:41:26.445990 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e49ceda81ca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:41:26.44112348 +0000 UTC m=+52.156916596,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:41:26 crc kubenswrapper[5113]: E1208 17:41:26.549877 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 17:41:26 crc kubenswrapper[5113]: I1208 17:41:26.607741 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:27 crc kubenswrapper[5113]: I1208 17:41:27.608791 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:28 crc kubenswrapper[5113]: I1208 17:41:28.620402 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.193671 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.195090 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.195139 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.195149 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.195173 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:41:29 crc kubenswrapper[5113]: E1208 17:41:29.204022 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.610398 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.933793 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.934071 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.935236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.935281 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.935297 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:29 crc kubenswrapper[5113]: E1208 17:41:29.935727 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:29 crc kubenswrapper[5113]: I1208 17:41:29.935994 5113 scope.go:117] "RemoveContainer" containerID="bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56"
Dec 08 17:41:29 crc kubenswrapper[5113]: E1208 17:41:29.936230 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 17:41:29 crc kubenswrapper[5113]: E1208 17:41:29.942132 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f4e49ceda81ca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f4e49ceda81ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:40:57.862529482 +0000 UTC m=+23.578322598,LastTimestamp:2025-12-08 17:41:29.93619914 +0000 UTC m=+55.651992246,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 17:41:30 crc kubenswrapper[5113]: I1208 17:41:30.609836 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:31 crc kubenswrapper[5113]: I1208 17:41:31.608775 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:32 crc kubenswrapper[5113]: E1208 17:41:32.298472 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 17:41:32 crc kubenswrapper[5113]: I1208 17:41:32.610063 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:33 crc kubenswrapper[5113]: I1208 17:41:33.607544 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:34 crc kubenswrapper[5113]: I1208 17:41:34.608403 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:34 crc kubenswrapper[5113]: E1208 17:41:34.744526 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 17:41:35 crc kubenswrapper[5113]: I1208 17:41:35.608901 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:36 crc kubenswrapper[5113]: I1208 17:41:36.204752 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:36 crc kubenswrapper[5113]: I1208 17:41:36.206403 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:36 crc kubenswrapper[5113]: I1208 17:41:36.206449 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:36 crc kubenswrapper[5113]: I1208 17:41:36.206461 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:36 crc kubenswrapper[5113]: I1208 17:41:36.206489 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:41:36 crc kubenswrapper[5113]: E1208 17:41:36.218388 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 17:41:36 crc kubenswrapper[5113]: I1208 17:41:36.607874 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:37 crc kubenswrapper[5113]: I1208 17:41:37.610966 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:38 crc kubenswrapper[5113]: I1208 17:41:38.607049 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:39 crc kubenswrapper[5113]: E1208 17:41:39.304137 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 17:41:39 crc kubenswrapper[5113]: I1208 17:41:39.607835 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 17:41:39 crc kubenswrapper[5113]: I1208 17:41:39.773664 5113 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-brdzt"
Dec 08 17:41:39 crc kubenswrapper[5113]: I1208 17:41:39.779353 5113 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-brdzt"
Dec 08 17:41:39 crc kubenswrapper[5113]: I1208 17:41:39.878120 5113 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.528453 5113 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.679789 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.681165 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.681334 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.681475 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:40 crc kubenswrapper[5113]: E1208 17:41:40.682172 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.954417 5113 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 17:36:39 +0000 UTC" deadline="2026-01-03 10:03:56.02379691 +0000 UTC"
Dec 08 17:41:40 crc kubenswrapper[5113]: I1208 17:41:40.954751 5113 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="616h22m15.069051204s"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.219058 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.220713 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.220753 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.220764 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.220856 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.229473 5113 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.229882 5113 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.229982 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.233836 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.233879 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.233888 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.233903 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.233912 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.256116 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.256181 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.256195 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.256216 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.256228 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.268326 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.275603 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.275682 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.275698 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.275714 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.275761 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.287111 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.294342 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.294369 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.294383 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.294399 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.294409 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:43Z","lastTransitionTime":"2025-12-08T17:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.304601 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.304809 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.304847 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.405144 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.506264 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.607290 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.679426 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.680510 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.680555 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.680570 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.681124 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 17:41:43 crc kubenswrapper[5113]: I1208 17:41:43.681451 5113 scope.go:117] "RemoveContainer" containerID="bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.707476 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.807775 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:43 crc kubenswrapper[5113]: E1208 17:41:43.908102 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.008405 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.109133 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.210251 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.310818 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.411738 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.512822 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.613167 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.713532 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.745356 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
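[Editor's note: each of the four status-patch retries above dies at the same point: the API server cannot POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node, and after the retry budget is spent the kubelet gives up with "update node status exceeds retry count". A hypothetical Go probe for whether anything is listening on that port; the address comes straight from the error text, nothing else is assumed.]

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 127.0.0.1:9743 is the webhook endpoint from the "connection refused"
	// errors above. A plain TCP dial is enough to distinguish "nothing
	// listening" from a TLS- or HTTP-level failure.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		// Matches the log's failure mode: dial tcp ... connect: connection refused.
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening; the webhook server is likely up")
}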
\"crc\" not found" Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.814164 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:44 crc kubenswrapper[5113]: E1208 17:41:44.914764 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: I1208 17:41:45.008167 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:41:45 crc kubenswrapper[5113]: I1208 17:41:45.009721 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f"} Dec 08 17:41:45 crc kubenswrapper[5113]: I1208 17:41:45.009895 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:45 crc kubenswrapper[5113]: I1208 17:41:45.010475 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:45 crc kubenswrapper[5113]: I1208 17:41:45.010503 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:45 crc kubenswrapper[5113]: I1208 17:41:45.010511 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.010839 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.015773 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.116901 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.217094 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.317880 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.418459 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.518836 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.619947 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.720846 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.821884 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:45 crc kubenswrapper[5113]: E1208 17:41:45.922450 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 
17:41:46.014522 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.015284 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.017592 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" exitCode=255 Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.017670 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f"} Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.017741 5113 scope.go:117] "RemoveContainer" containerID="bb58bb2dfedcbfbf1a87aa11b9e296abf195b8a9a3f94e2a3ba2d41f77efbe56" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.018185 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.019202 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.019285 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.019309 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.020149 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.020610 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.021008 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.023326 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.124250 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.225393 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.326580 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.427642 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" 
not found" Dec 08 17:41:46 crc kubenswrapper[5113]: I1208 17:41:46.438226 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.527935 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.628947 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.729457 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.829871 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:46 crc kubenswrapper[5113]: E1208 17:41:46.930843 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: I1208 17:41:47.022005 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:41:47 crc kubenswrapper[5113]: I1208 17:41:47.023639 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 17:41:47 crc kubenswrapper[5113]: I1208 17:41:47.024228 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:47 crc kubenswrapper[5113]: I1208 17:41:47.024253 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:47 crc kubenswrapper[5113]: I1208 17:41:47.024263 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.024577 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 17:41:47 crc kubenswrapper[5113]: I1208 17:41:47.024782 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.024944 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.031553 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.131971 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.232933 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.333919 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 
17:41:47.434901 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.535448 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.636610 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.736972 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.838082 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:47 crc kubenswrapper[5113]: E1208 17:41:47.939020 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.039545 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.140369 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.241531 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.342523 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.442882 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.543019 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.643755 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.744578 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.845384 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:48 crc kubenswrapper[5113]: E1208 17:41:48.946334 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.047371 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.148447 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.249410 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.350446 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.451527 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc 
kubenswrapper[5113]: E1208 17:41:49.552426 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.652815 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.754029 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.854259 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:49 crc kubenswrapper[5113]: E1208 17:41:49.955433 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.056280 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.156699 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.257127 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.357459 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.458192 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.558739 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.659605 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.760280 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.860790 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:50 crc kubenswrapper[5113]: E1208 17:41:50.961394 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.062325 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.162839 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.263519 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.364090 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.465438 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.566861 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 
08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.667778 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.769094 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.869994 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:51 crc kubenswrapper[5113]: E1208 17:41:51.971132 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.072273 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.173446 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.274236 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.375638 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.476376 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.576511 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.637655 5113 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.644768 5113 apiserver.go:52] "Watching apiserver" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.649429 5113 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.650196 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rzvvg","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq","openshift-ovn-kubernetes/ovnkube-node-pjxmr","openshift-dns/node-resolver-ld988","openshift-machine-config-operator/machine-config-daemon-mf4d4","openshift-multus/multus-g9mkp","openshift-multus/network-metrics-daemon-bc5j2","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-image-registry/node-ca-jcdp7"] Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.651835 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.653085 5113 util.go:30] "No sandbox for pod can be found. 
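
The repeated kubelet_node_status.go:515 "Error getting the current node from lister" records above all have one cause: the kubelet's Node informer cache does not yet contain the "crc" object, so every status-sync tick logs the same error. The spam stops at 17:41:52.637655, exactly when "Caches populated" type="*v1.Node" appears. A minimal client-go sketch of the same wait-until-registered pattern follows; the kubeconfig path and the one-second poll interval are assumptions for illustration, not values taken from this log.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until the Node object exists, mirroring what the kubelet
        // waits for before its node-lister errors stop.
        for {
            _, err := client.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
            if err == nil {
                fmt.Println(`node "crc" registered`)
                return
            }
            fmt.Println(`node "crc" not found yet:`, err)
            time.Sleep(time.Second)
        }
    }
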
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.653189 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.654439 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.654648 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.655377 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.655465 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.655484 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.655814 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.656925 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.657654 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.657748 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.658376 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.659581 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.660678 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.660853 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.662647 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.663257 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.678571 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.678645 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.678664 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.678694 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.678713 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.684344 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.698836 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.709278 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.725016 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.729815 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.736332 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.745803 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.756798 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.757961 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758061 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8h7r\" (UniqueName: \"kubernetes.io/projected/88405869-34c6-458b-ab82-663f9a965335-kube-api-access-r8h7r\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758134 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.758137 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.758325 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.258298111 +0000 UTC m=+78.974091237 (durationBeforeRetry 500ms). 
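
Every "Failed to update status for pod" record in this stretch fails the same way: the API server cannot POST to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because nothing is listening ("connect: connection refused"). The webhook is served by the network-node-identity-dgvkt pod whose webhook container is itself still in ContainerCreating above, so status patches keep failing until that pod starts. The reachability question reduces to a TCP dial, sketched below with the stdlib; the endpoint comes from the log, the timeout is an assumption.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the failing webhook calls target in the log above.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
        if err != nil {
            // Expected while network-node-identity is not running:
            // "dial tcp 127.0.0.1:9743: connect: connection refused"
            fmt.Println("webhook not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("webhook port is accepting connections")
    }
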
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758783 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758848 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758885 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758918 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.758949 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759217 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759333 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759370 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod 
\"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.759418 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.759556 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.259525699 +0000 UTC m=+78.975318855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759686 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759744 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759787 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759849 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759936 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.759976 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.760067 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.772667 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.772704 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.772720 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.772821 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.272798827 +0000 UTC m=+78.988591953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.776128 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.776173 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.776189 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.776284 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-08 17:41:53.276262824 +0000 UTC m=+78.992055950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.781361 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.781449 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.781472 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.781501 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.781523 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860316 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860466 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860539 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r8h7r\" (UniqueName: \"kubernetes.io/projected/88405869-34c6-458b-ab82-663f9a965335-kube-api-access-r8h7r\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860602 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860680 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860697 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.860746 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.860473 5113 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: object "openshift-ovn-kubernetes"/"env-overrides" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.860877 5113 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: object "openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.860880 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides podName:88405869-34c6-458b-ab82-663f9a965335 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.360844894 +0000 UTC m=+79.076638050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides") pod "ovnkube-control-plane-57b78d8988-k6xbq" (UID: "88405869-34c6-458b-ab82-663f9a965335") : object "openshift-ovn-kubernetes"/"env-overrides" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.860985 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert podName:88405869-34c6-458b-ab82-663f9a965335 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.360960337 +0000 UTC m=+79.076753493 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-57b78d8988-k6xbq" (UID: "88405869-34c6-458b-ab82-663f9a965335") : object "openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.861114 5113 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.861155 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.861211 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config podName:88405869-34c6-458b-ab82-663f9a965335 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.361189812 +0000 UTC m=+79.076982958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config") pod "ovnkube-control-plane-57b78d8988-k6xbq" (UID: "88405869-34c6-458b-ab82-663f9a965335") : object "openshift-ovn-kubernetes"/"ovnkube-config" not registered Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.863229 5113 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.863396 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.864364 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.864588 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.871492 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.871840 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.872124 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.872449 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.873442 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.873700 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.875700 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.875984 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.876069 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.876280 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.877240 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.879225 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.879705 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.885103 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.885206 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.885254 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.885291 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.885316 5113 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.893348 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8h7r\" (UniqueName: \"kubernetes.io/projected/88405869-34c6-458b-ab82-663f9a965335-kube-api-access-r8h7r\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.893996 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.906416 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
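Every status_manager "Failed to update status for pod" entry in this stretch fails for the same reason, visible at the tail of each patch dump: the API server cannot call the pod.network-node-identity.openshift.io mutating webhook. That webhook is served by the network-node-identity pod on this node at 127.0.0.1:9743 (its container script below binds --webhook-host=127.0.0.1 --webhook-port=9743), and that pod is itself still being created, so this is a bootstrap-ordering loop that clears on its own once the webhook container starts. Two probes, assuming oc access and a shell on the node:

    oc -n openshift-network-node-identity get pods -o wide
    # reachability only; the real webhook call is an HTTPS POST to /pod (path taken from the error)
    curl -k https://127.0.0.1:9743/pod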
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.919011 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.928856 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.931191 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.931318 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.931383 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
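The NotReady condition text spells out what gates everything else in this section: the container runtime reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet, and that file is written by multus/ovn-kubernetes, which are themselves still starting. Pods that need a pod network are skipped ("Error syncing pod" here), while host-network pods can proceed. To watch this clear from the node (directory path copied from the message; crictl ships with the runtime):

    ls -l /etc/kubernetes/cni/net.d/
    crictl info | grep -i networkready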
pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.934106 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.934386 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.934762 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.934913 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.934921 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.936309 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.937724 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.937983 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.938075 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.938214 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.938270 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.939639 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.941193 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.941477 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.942078 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.942179 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.943893 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.944030 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.944199 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.944545 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.959847 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.961895 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-etc-kubernetes\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962012 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4621882-3d98-4910-9263-5959d2302427-cni-binary-copy\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962117 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-hostroot\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962210 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-k8s-cni-cncf-io\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962338 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-cnibin\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962439 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-multus-certs\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962539 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-system-cni-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc 
kubenswrapper[5113]: I1208 17:41:52.962623 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4621882-3d98-4910-9263-5959d2302427-multus-daemon-config\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962698 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-netns\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.962820 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-os-release\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.963278 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-socket-dir-parent\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.963464 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mrhv\" (UniqueName: \"kubernetes.io/projected/c4621882-3d98-4910-9263-5959d2302427-kube-api-access-7mrhv\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.963524 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.963647 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vwdx\" (UniqueName: \"kubernetes.io/projected/d0a3643f-fbed-4614-a9cb-87b71148c273-kube-api-access-2vwdx\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.963692 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-cni-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.963731 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-conf-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 
17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.964196 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-cni-multus\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.964542 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-kubelet\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.964705 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-cni-bin\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.972015 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.972008 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
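The reconciler_common.go bursts are the volume manager's normal two-step flow: VerifyControllerAttachedVolume confirms each desired volume needs no (or already has) an attachment, then MountVolume.SetUp materializes it under the pod's directory. The hostPath volumes succeed immediately; the configmap- and secret-backed ones are the entries retrying above. The materialized volumes can be inspected on the node (pod UID copied from the multus-g9mkp lines):

    ls /var/lib/kubelet/pods/c4621882-3d98-4910-9263-5959d2302427/volumes/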
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.975728 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.978349 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.982130 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.982190 5113 util.go:30] "No sandbox for pod can be found. 
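"Creating a mirror pod for static pod" (etcd-crc above, openshift-kube-scheduler-crc here) is the kubelet's static-pod mechanism: these control-plane pods are read from a manifest directory on disk and run even without an API server, and the mirror pod is only the API-side reflection that makes them visible to oc. A sketch, assuming the usual OpenShift staticPodPath (an assumption; confirm in the node's kubelet config before relying on it):

    ls /etc/kubernetes/manifests/    # staticPodPath on a typical OpenShift node
    oc -n openshift-etcd get pod etcd-crc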
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.982207 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.982541 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.986463 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.987235 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.987279 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.987299 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.987321 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.987337 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:52Z","lastTransitionTime":"2025-12-08T17:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
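These four node events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) are re-recorded on every status sync while the node stays NotReady; only the Ready condition is actually False, and its message repeats the CNI cause above. The conditions are easier to read directly than from the event lines:

    oc get node crc -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
    oc describe node crc    # Conditions plus the recorded events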
Has your network provider started?"} Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.990444 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:52 crc kubenswrapper[5113]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:52 crc kubenswrapper[5113]: set -o allexport Dec 08 17:41:52 crc kubenswrapper[5113]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 17:41:52 crc kubenswrapper[5113]: source /etc/kubernetes/apiserver-url.env Dec 08 17:41:52 crc kubenswrapper[5113]: else Dec 08 17:41:52 crc kubenswrapper[5113]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 17:41:52 crc kubenswrapper[5113]: exit 1 Dec 08 17:41:52 crc kubenswrapper[5113]: fi Dec 08 17:41:52 crc kubenswrapper[5113]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 17:41:52 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:52 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.991614 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 17:41:52 crc kubenswrapper[5113]: E1208 17:41:52.997901 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:52 crc kubenswrapper[5113]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:52 crc kubenswrapper[5113]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:52 crc kubenswrapper[5113]: set -o allexport Dec 08 17:41:52 crc kubenswrapper[5113]: source "/env/_master" Dec 08 17:41:52 crc kubenswrapper[5113]: set +o allexport Dec 08 17:41:52 crc kubenswrapper[5113]: fi Dec 08 17:41:52 crc kubenswrapper[5113]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 08 17:41:52 crc kubenswrapper[5113]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 17:41:52 crc kubenswrapper[5113]: ho_enable="--enable-hybrid-overlay" Dec 08 17:41:52 crc kubenswrapper[5113]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 17:41:52 crc kubenswrapper[5113]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 17:41:52 crc kubenswrapper[5113]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 17:41:52 crc kubenswrapper[5113]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 17:41:52 crc kubenswrapper[5113]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 17:41:52 crc kubenswrapper[5113]: --webhook-host=127.0.0.1 \ Dec 08 17:41:52 crc kubenswrapper[5113]: --webhook-port=9743 \ Dec 08 17:41:52 crc kubenswrapper[5113]: ${ho_enable} \ Dec 08 17:41:52 crc kubenswrapper[5113]: --enable-interconnect \ Dec 08 17:41:52 crc kubenswrapper[5113]: --disable-approver \ Dec 08 17:41:52 crc kubenswrapper[5113]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 17:41:52 crc kubenswrapper[5113]: --wait-for-kubernetes-api=200s \ Dec 08 17:41:52 crc kubenswrapper[5113]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 17:41:52 crc kubenswrapper[5113]: --loglevel="${LOGLEVEL}" Dec 08 17:41:52 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:52 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:52 crc kubenswrapper[5113]: I1208 17:41:52.999423 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.000286 5113 util.go:30] "No sandbox for pod can be found. 
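
Note on the CreateContainerConfigError above: the kubelet synthesizes the Docker-style service environment variables for a container from its Service informer cache, and until that cache has synced at least once after kubelet restart it refuses to construct any envvars, so container creation fails and the pod worker retries. The startup scripts for both network-node-identity containers also show the cluster's env-override pattern. A minimal, runnable sketch of just that pattern (the path and the LOGLEVEL variable are taken from the log; the trailing echo is illustrative only):

    #!/bin/bash
    # Sketch of the /env/_master override pattern from the webhook/approver
    # scripts: with allexport on, every VAR=value assignment sourced from the
    # file is exported into the environment before the real binary is exec'd.
    set -xe
    if [[ -f "/env/_master" ]]; then
      set -o allexport    # auto-export every assignment made while this is set
      source "/env/_master"
      set +o allexport
    fi
    echo "LOGLEVEL is ${LOGLEVEL:-unset}"   # normally injected via the pod spec Env
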
Need to start a new one" pod="openshift-image-registry/node-ca-jcdp7"
Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.003337 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ -f "/env/_master" ]]; then
Dec 08 17:41:53 crc kubenswrapper[5113]: set -o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: source "/env/_master"
Dec 08 17:41:53 crc kubenswrapper[5113]: set +o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: fi
Dec 08 17:41:53 crc kubenswrapper[5113]:
Dec 08 17:41:53 crc kubenswrapper[5113]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Dec 08 17:41:53 crc kubenswrapper[5113]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 17:41:53 crc kubenswrapper[5113]: --disable-webhook \
Dec 08 17:41:53 crc kubenswrapper[5113]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Dec 08 17:41:53 crc kubenswrapper[5113]: --loglevel="${LOGLEVEL}"
Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError"
Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.003430 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.003537 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.003479 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap"
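
The "Caches populated" reflector entries interleaved here are the other half of that story: each records an informer cache finishing its initial list. The ones shown are ConfigMap and Secret caches; when the kubelet's *v1.Service reflector logs the same message, the "cannot construct envvars" failures above stop recurring. One hedged way to watch for that transition on the node (assumes journalctl access and that the unit is named kubelet, as the "Starting Kubernetes Kubelet" line at the top of this capture suggests):

    # Follow the kubelet journal and surface informer sync progress (run on the node).
    sudo journalctl -u kubelet -f | grep -F 'Caches populated'
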
reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.004700 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.005122 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.008855 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.009156 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.013057 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g9mkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4621882-3d98-4910-9263-5959d2302427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g9mkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.023280 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.030550 5113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.034975 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
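
The repeated "Failed to update status for pod ... connection refused" entries all trace back to a single condition: the admission webhook pod.network-node-identity.openshift.io routes every pod status patch from this node through https://127.0.0.1:9743, but the pod serving that endpoint (network-node-identity-dgvkt) is itself stuck in CreateContainerConfigError, so every status update bounces until the webhook container starts. A hedged probe of the same endpoint the kubelet is dialing (run on the node; the kubelet actually POSTs an admission review body, this only checks whether anything is listening):

    # Does anything answer on the webhook port yet? -k because the cert is cluster-internal.
    curl -ks -o /dev/null -w '%{http_code}\n' 'https://127.0.0.1:9743/pod?timeout=10s' \
      || echo 'webhook not listening yet'
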
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.042646 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"147cc4fec17d95d14fb0f000bc1fb333fc1b656448c658259abf2407ee59caa5"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.043822 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"ba5a71e972bbc1476d383d6dc30a55d2586fe6a1f9710c0415a018dd31e844c7"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.044177 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.044614 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ -f "/env/_master" ]]; then
Dec 08 17:41:53 crc kubenswrapper[5113]: set -o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: source "/env/_master"
Dec 08 17:41:53 crc kubenswrapper[5113]: set +o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: fi
Dec 08 17:41:53 crc kubenswrapper[5113]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Dec 08 17:41:53 crc kubenswrapper[5113]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Dec 08 17:41:53 crc kubenswrapper[5113]: ho_enable="--enable-hybrid-overlay"
Dec 08 17:41:53 crc kubenswrapper[5113]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Dec 08 17:41:53 crc kubenswrapper[5113]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Dec 08 17:41:53 crc kubenswrapper[5113]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Dec 08 17:41:53 crc kubenswrapper[5113]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 17:41:53 crc kubenswrapper[5113]: --webhook-cert-dir="/etc/webhook-cert" \
Dec 08 17:41:53 crc kubenswrapper[5113]: --webhook-host=127.0.0.1 \
Dec 08 17:41:53 crc kubenswrapper[5113]: --webhook-port=9743 \
Dec 08 17:41:53 crc kubenswrapper[5113]: ${ho_enable} \
Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-interconnect \
Dec 08 17:41:53 crc kubenswrapper[5113]: --disable-approver \
Dec 08 17:41:53 crc kubenswrapper[5113]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Dec 08 17:41:53 crc kubenswrapper[5113]: --wait-for-kubernetes-api=200s \
Dec 08 17:41:53 crc kubenswrapper[5113]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Dec 08 17:41:53 crc kubenswrapper[5113]: --loglevel="${LOGLEVEL}"
Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct
envvars
Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError"
Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.045349 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Dec 08 17:41:53 crc kubenswrapper[5113]: set -o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Dec 08 17:41:53 crc kubenswrapper[5113]: source /etc/kubernetes/apiserver-url.env
Dec 08 17:41:53 crc kubenswrapper[5113]: else
Dec 08 17:41:53 crc kubenswrapper[5113]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Dec 08 17:41:53 crc kubenswrapper[5113]: exit 1
Dec 08 17:41:53 crc kubenswrapper[5113]: fi
Dec 08 17:41:53 crc kubenswrapper[5113]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError"
Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.046425 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8"
Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.047075 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 17:41:53 crc kubenswrapper[5113]: container
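
The network-operator container above fails for the same envvar reason, but its startup script is worth a note of its own: unlike the network-node-identity scripts, it hard-fails when /etc/kubernetes/apiserver-url.env is absent instead of continuing. A standalone sketch of that guard (file path and error text are taken from the log; the final echo is illustrative and assumes the file defines KUBERNETES_SERVICE_* variables, which is not shown in this capture):

    #!/bin/bash
    # Guard pattern from the network-operator startup script: refuse to start
    # without the API server URL env file, export its contents otherwise.
    set -o allexport
    if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
      source /etc/kubernetes/apiserver-url.env
    else
      echo "Error: /etc/kubernetes/apiserver-url.env is missing"
      exit 1
    fi
    echo "KUBERNETES_SERVICE_HOST=${KUBERNETES_SERVICE_HOST:-unset}"
    # the real container then execs: /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
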
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ -f "/env/_master" ]]; then
Dec 08 17:41:53 crc kubenswrapper[5113]: set -o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: source "/env/_master"
Dec 08 17:41:53 crc kubenswrapper[5113]: set +o allexport
Dec 08 17:41:53 crc kubenswrapper[5113]: fi
Dec 08 17:41:53 crc kubenswrapper[5113]:
Dec 08 17:41:53 crc kubenswrapper[5113]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Dec 08 17:41:53 crc kubenswrapper[5113]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 17:41:53 crc kubenswrapper[5113]: --disable-webhook \
Dec 08 17:41:53 crc kubenswrapper[5113]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Dec 08 17:41:53 crc kubenswrapper[5113]: --loglevel="${LOGLEVEL}"
Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError"
Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.048887 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0"
Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.054766 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.063918 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-ld988" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d90afc7e-e255-4843-b19d-3ab9233e2024\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlmst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ld988\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065586 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065651 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065688 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065723 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065761 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065793 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 
17:41:53.065827 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065858 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065926 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.065962 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066070 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066105 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066141 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066185 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066218 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 
08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066249 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066280 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066311 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066393 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066429 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066458 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066475 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066557 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066590 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066615 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.066641 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.067545 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068073 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068071 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068131 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068112 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068262 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068301 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068338 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068373 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068402 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068426 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068454 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068479 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068504 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod 
\"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068527 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068550 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068576 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068600 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068631 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068657 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068694 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068708 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068717 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068825 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068861 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068888 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069026 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069280 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069422 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069465 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069514 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069544 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069572 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.068525 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070115 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070202 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070366 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070554 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070693 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070727 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.070945 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.071156 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.071179 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.071335 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.071429 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.071749 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.071937 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072172 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072329 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.069594 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072470 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072483 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072541 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072568 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072614 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072618 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072639 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072689 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072716 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072788 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072817 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072870 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072896 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072943 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072971 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073077 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073102 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073156 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073204 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073269 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073314 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073339 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073364 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073410 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073437 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073486 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073555 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073585 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073606 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073651 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073674 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073722 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073746 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073770 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073817 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073841 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073889 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073913 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073958 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073990 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074048 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074078 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074131 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074159 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074211 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074240 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074288 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074312 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074333 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074378 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074405 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074453 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074478 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074525 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082074 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082140 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082183 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082232 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082271 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082307 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082340 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082382 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082414 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082450 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.085244 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072780 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.085481 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.072787 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073118 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073229 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073407 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073471 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073496 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.073723 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074023 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074075 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074136 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074127 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074405 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.074433 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.082901 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083253 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083409 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083422 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083423 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083518 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083949 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.083991 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084057 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084074 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084275 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084689 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084953 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084889 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.085252 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.084350 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.085554 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.085937 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086223 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086474 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086737 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086747 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086845 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.087176 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). 
InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.087360 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086638 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150992a3-efc5-4dc2-a696-390ea843f8c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pjxmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.087686 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.087722 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.086950 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.087939 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.088077 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.088690 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.088828 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.089257 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.089656 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.089942 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.090005 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.089196 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.081563 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.090192 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.090528 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.091297 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.091689 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.091758 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092443 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.091804 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092596 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092684 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092738 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092790 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092834 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092883 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092928 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.092984 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093054 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093073 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093096 5113 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093104 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093131 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093168 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093203 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093241 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093276 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093314 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093345 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093379 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093413 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093447 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093478 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093506 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093537 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093576 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093603 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093633 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093663 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093693 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093719 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: 
\"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093767 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093813 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.094720 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.095189 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.095951 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.096010 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.096075 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.096116 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.096317 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097736 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097795 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097820 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097844 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097865 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097914 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097972 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097998 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098021 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098066 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098094 
5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098116 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098136 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098269 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098337 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098496 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098738 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098812 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098842 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098870 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098895 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098923 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098954 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098980 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099010 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099056 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099084 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099112 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099137 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc 
kubenswrapper[5113]: I1208 17:41:53.099272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099299 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099329 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099357 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099384 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093106 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099412 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099445 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099457 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099469 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099476 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099501 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099530 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099559 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099584 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099655 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099688 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093409 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093417 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093463 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093727 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.093782 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.094086 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.094537 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.094801 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.094893 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.095100 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.095674 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.095862 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.096013 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.096882 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097395 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097438 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097513 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097583 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097586 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.097611 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098373 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098410 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.098620 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.099587 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100441 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100474 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100497 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100527 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100548 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100571 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100593 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100616 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100637 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100658 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: 
\"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100679 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100699 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100723 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100741 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100762 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100786 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100813 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100841 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100862 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100894 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100924 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100950 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.100975 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101054 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-cni-multus\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101119 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-os-release\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101140 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cni-binary-copy\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101180 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-env-overrides\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101202 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-kubelet\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101222 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cnibin\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101240 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-kubelet\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101257 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-netns\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101276 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d90afc7e-e255-4843-b19d-3ab9233e2024-hosts-file\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101299 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-cni-bin\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101318 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-etc-kubernetes\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101335 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101359 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqg2d\" (UniqueName: \"kubernetes.io/projected/150992a3-efc5-4dc2-a696-390ea843f8c4-kube-api-access-xqg2d\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101377 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d90afc7e-e255-4843-b19d-3ab9233e2024-tmp-dir\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101397 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/52658507-b084-49cb-a694-f012d44ccc82-mcd-auth-proxy-config\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101426 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4621882-3d98-4910-9263-5959d2302427-cni-binary-copy\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101445 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-hostroot\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101463 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmst\" (UniqueName: \"kubernetes.io/projected/d90afc7e-e255-4843-b19d-3ab9233e2024-kube-api-access-tlmst\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101497 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-k8s-cni-cncf-io\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101515 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-log-socket\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101536 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-cnibin\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101554 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-multus-certs\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101573 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-system-cni-dir\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101590 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-systemd-units\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101608 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-var-lib-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101626 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/150992a3-efc5-4dc2-a696-390ea843f8c4-ovn-node-metrics-cert\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101654 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101671 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-node-log\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101689 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dff1756e-0a6c-408f-9d31-c7cc88d1d970-serviceca\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101711 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-system-cni-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101730 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4621882-3d98-4910-9263-5959d2302427-multus-daemon-config\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101747 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-netns\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101764 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-ovn\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.101782 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dff1756e-0a6c-408f-9d31-c7cc88d1d970-host\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102131 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-etc-kubernetes\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102572 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5g6p\" (UniqueName: \"kubernetes.io/projected/52658507-b084-49cb-a694-f012d44ccc82-kube-api-access-h5g6p\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102656 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102690 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102785 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-os-release\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102822 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-socket-dir-parent\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102853 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrhv\" (UniqueName: \"kubernetes.io/projected/c4621882-3d98-4910-9263-5959d2302427-kube-api-access-7mrhv\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102883 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102910 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-systemd\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.102949 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6j4f\" (UniqueName: \"kubernetes.io/projected/053be0da-d1f2-46d1-83b1-c9135f5c3c61-kube-api-access-l6j4f\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.103265 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4621882-3d98-4910-9263-5959d2302427-cni-binary-copy\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.104843 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.604802933 +0000 UTC m=+79.320596069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.104901 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-cni-bin\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.104937 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-cni-multus\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.104928 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-etc-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.104961 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-hostroot\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.104989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-socket-dir-parent\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.104993 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-ovn-kubernetes\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105022 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-var-lib-kubelet\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105059 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-k8s-cni-cncf-io\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105104 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-multus-certs\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105133 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-os-release\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105162 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-cnibin\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105189 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-host-run-netns\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.105215 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105450 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105763 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105847 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105107 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-bin\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.105979 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-netd\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.106121 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:53.606098502 +0000 UTC m=+79.321891618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.106300 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-script-lib\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.106423 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.106707 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.106870 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107049 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107166 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4621882-3d98-4910-9263-5959d2302427-multus-daemon-config\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107163 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107283 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107294 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107498 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107544 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-slash\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107566 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dcsx\" (UniqueName: \"kubernetes.io/projected/dff1756e-0a6c-408f-9d31-c7cc88d1d970-kube-api-access-2dcsx\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107699 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107783 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.107965 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108022 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108194 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108551 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108720 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108718 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108935 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108818 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.109004 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.109007 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.109205 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.109439 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.109670 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.110301 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.108121 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.110883 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-system-cni-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111084 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111347 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111514 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2vwdx\" (UniqueName: \"kubernetes.io/projected/d0a3643f-fbed-4614-a9cb-87b71148c273-kube-api-access-2vwdx\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111585 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-config\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111718 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/52658507-b084-49cb-a694-f012d44ccc82-rootfs\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111747 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52658507-b084-49cb-a694-f012d44ccc82-proxy-tls\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111805 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-cni-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111891 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-conf-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111897 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-cni-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.111941 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4621882-3d98-4910-9263-5959d2302427-multus-conf-dir\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.112071 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.112374 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.112411 5113 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.112426 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.112498 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.112561 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113125 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113178 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113305 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113707 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). 
InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113833 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114107 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114143 5113 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114156 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114172 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114185 5113 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114197 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113772 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114229 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113826 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114136 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114143 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.113757 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114254 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114289 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114310 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114313 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114321 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114344 5113 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114361 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114375 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114386 5113 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114396 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114407 5113 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114525 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114539 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114550 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114564 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114576 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114588 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc 
kubenswrapper[5113]: I1208 17:41:53.114601 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114613 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114628 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114630 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114641 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114657 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114669 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114680 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114859 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114871 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114881 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114893 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114931 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: 
\"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114946 5113 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114956 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114967 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114998 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115009 5113 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115019 5113 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115029 5113 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116107 5113 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116123 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116139 5113 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116150 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116159 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116170 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116179 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116190 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116203 5113 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116213 5113 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116224 5113 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116238 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116252 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116265 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116278 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116292 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116305 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116316 5113 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116326 5113 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116336 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116346 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116356 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116366 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116375 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116388 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116398 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116410 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116419 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116428 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116440 5113 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116456 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116471 5113 reconciler_common.go:299] "Volume 
detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116484 5113 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116494 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116503 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116516 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116530 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116543 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116556 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116569 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116582 5113 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116596 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116606 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116619 5113 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116632 5113 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116644 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116683 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116699 5113 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116791 5113 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116807 5113 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.114781 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115372 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115398 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115408 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115734 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115901 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115898 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115929 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116142 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116388 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116941 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116544 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116980 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117006 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117019 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117084 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117103 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117146 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117156 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117165 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117177 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117186 5113 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117214 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117225 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117236 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") 
on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117247 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117259 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117269 5113 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117279 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117288 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117314 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116724 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.115600 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.116794 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117104 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117274 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117472 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117574 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117701 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117733 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117822 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117853 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117871 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0a3643f-fbed-4614-a9cb-87b71148c273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bc5j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.117922 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: 
"81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.118258 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.120832 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.121012 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.121284 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.121310 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.121408 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.121648 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.122228 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.123905 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.124444 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.125307 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.125698 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.126769 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.127009 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.126520 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.127253 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.127554 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.127612 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrhv\" (UniqueName: \"kubernetes.io/projected/c4621882-3d98-4910-9263-5959d2302427-kube-api-access-7mrhv\") pod \"multus-g9mkp\" (UID: \"c4621882-3d98-4910-9263-5959d2302427\") " pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.128003 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.129261 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.129329 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.129353 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.129806 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.130252 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52658507-b084-49cb-a694-f012d44ccc82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf4d4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.130790 5113 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2vwdx\" (UniqueName: \"kubernetes.io/projected/d0a3643f-fbed-4614-a9cb-87b71148c273-kube-api-access-2vwdx\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.131424 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.131513 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.131922 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.131933 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.131921 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.132310 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.132325 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.132367 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.133017 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.133026 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.133152 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.133231 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.133823 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.133902 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.134001 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.135765 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.139719 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.140052 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.140423 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.142002 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.142861 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.143066 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.152135 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0a3643f-fbed-4614-a9cb-87b71148c273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bc5j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.156646 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.162751 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.162939 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.163228 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52658507-b084-49cb-a694-f012d44ccc82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf4d4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.165349 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.172651 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.172897 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.173280 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.173559 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.173769 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.177604 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.182944 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc 
kubenswrapper[5113]: E1208 17:41:53.186732 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.187983 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.192708 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g9mkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4621882-3d98-4910-9263-5959d2302427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g9mkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.199100 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dff1756e-0a6c-408f-9d31-c7cc88d1d970\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dcsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.201188 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.201224 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.201235 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.201254 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.201269 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.215793 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc63f51f-8fdb-44a8-bdff-ec60915754d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e1ec5e3621120e1d45d214b07ea9461d74b8876f2ecb753c9cb64edceb6e9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://32d2671acebdb7c9bf493978733f31bdc688b2c39538d6accbde1f8acb545ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a976ae78e4000d634904dacd9850a4ef4b1a8f8466096b6d6a1a81bb1509d028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d08d6cf52478608ef265a49b4a56ce194ac8e56196751c94c2a0d8811c6fd23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://897669035e32774ca5030c245e526d1f4a891d11bf807b707598ca43dba686f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218112 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-os-release\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218154 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cni-binary-copy\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218215 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-env-overrides\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218242 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cnibin\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218279 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-os-release\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218400 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-kubelet\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218508 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-kubelet\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218566 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cnibin\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218608 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-netns\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218648 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-netns\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218680 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d90afc7e-e255-4843-b19d-3ab9233e2024-hosts-file\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218770 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218824 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d90afc7e-e255-4843-b19d-3ab9233e2024-hosts-file\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218864 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xqg2d\" (UniqueName: \"kubernetes.io/projected/150992a3-efc5-4dc2-a696-390ea843f8c4-kube-api-access-xqg2d\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218898 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218914 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-env-overrides\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218938 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d90afc7e-e255-4843-b19d-3ab9233e2024-tmp-dir\") pod \"node-resolver-ld988\" (UID: 
\"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218981 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cni-binary-copy\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.218994 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52658507-b084-49cb-a694-f012d44ccc82-mcd-auth-proxy-config\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219087 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmst\" (UniqueName: \"kubernetes.io/projected/d90afc7e-e255-4843-b19d-3ab9233e2024-kube-api-access-tlmst\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219173 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-log-socket\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219255 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-system-cni-dir\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219427 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-systemd-units\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219460 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-systemd-units\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219425 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-system-cni-dir\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219424 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-log-socket\") pod \"ovnkube-node-pjxmr\" (UID: 
\"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219471 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d90afc7e-e255-4843-b19d-3ab9233e2024-tmp-dir\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219510 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-var-lib-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219547 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-var-lib-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219549 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/150992a3-efc5-4dc2-a696-390ea843f8c4-ovn-node-metrics-cert\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219621 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219646 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-node-log\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219696 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dff1756e-0a6c-408f-9d31-c7cc88d1d970-serviceca\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219768 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-ovn\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219794 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dff1756e-0a6c-408f-9d31-c7cc88d1d970-host\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 
17:41:53.219817 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h5g6p\" (UniqueName: \"kubernetes.io/projected/52658507-b084-49cb-a694-f012d44ccc82-kube-api-access-h5g6p\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219846 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52658507-b084-49cb-a694-f012d44ccc82-mcd-auth-proxy-config\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.219989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dff1756e-0a6c-408f-9d31-c7cc88d1d970-host\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220011 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-ovn\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220083 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220362 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220430 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220185 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/053be0da-d1f2-46d1-83b1-c9135f5c3c61-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220577 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-systemd\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220712 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6j4f\" (UniqueName: \"kubernetes.io/projected/053be0da-d1f2-46d1-83b1-c9135f5c3c61-kube-api-access-l6j4f\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220811 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-etc-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220882 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-ovn-kubernetes\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220952 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-bin\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.221459 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-bin\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220198 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-node-log\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.221599 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/dff1756e-0a6c-408f-9d31-c7cc88d1d970-serviceca\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.220672 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-systemd\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.221644 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc 
kubenswrapper[5113]: I1208 17:41:53.221700 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-etc-openvswitch\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.221739 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-ovn-kubernetes\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.222693 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-netd\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.222766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-script-lib\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223446 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-netd\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223449 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223523 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-slash\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223546 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2dcsx\" (UniqueName: \"kubernetes.io/projected/dff1756e-0a6c-408f-9d31-c7cc88d1d970-kube-api-access-2dcsx\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223644 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-slash\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223699 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-config\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223723 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/52658507-b084-49cb-a694-f012d44ccc82-rootfs\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223741 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52658507-b084-49cb-a694-f012d44ccc82-proxy-tls\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223832 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223842 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223851 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223861 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223872 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223880 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223910 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223920 5113 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223929 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc 
kubenswrapper[5113]: I1208 17:41:53.223938 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.223947 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224012 5113 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224021 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224071 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224105 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224116 5113 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224124 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224133 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224159 5113 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224196 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224207 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224216 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 
17:41:53.224227 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224236 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224247 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224281 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224307 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224320 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224333 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224346 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224359 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224373 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224387 5113 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224402 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224416 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 
17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224429 5113 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224440 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224452 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224464 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224475 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224485 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224496 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224496 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/053be0da-d1f2-46d1-83b1-c9135f5c3c61-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224509 5113 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224499 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-script-lib\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224521 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224533 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on 
node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224544 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224556 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224566 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224576 5113 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224580 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/52658507-b084-49cb-a694-f012d44ccc82-rootfs\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224586 5113 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224608 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224623 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224637 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224650 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224660 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224670 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224700 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224712 5113 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224723 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224735 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224747 5113 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224759 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224771 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224781 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/150992a3-efc5-4dc2-a696-390ea843f8c4-ovn-node-metrics-cert\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224782 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224881 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224931 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224946 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224957 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224967 5113 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224977 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224988 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.224999 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225010 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225021 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225046 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225064 5113 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225079 5113 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225092 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225105 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225116 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225127 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225138 5113 reconciler_common.go:299] "Volume detached 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225148 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225158 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225065 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-config\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225168 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225244 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225261 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225276 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225289 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225300 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225313 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225327 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225338 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225348 5113 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225360 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225371 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225385 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225396 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225409 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225424 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225440 5113 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225451 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225462 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225473 5113 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225486 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.225497 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 
crc kubenswrapper[5113]: I1208 17:41:53.225509 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.226464 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.230880 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52658507-b084-49cb-a694-f012d44ccc82-proxy-tls\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.235090 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5g6p\" (UniqueName: \"kubernetes.io/projected/52658507-b084-49cb-a694-f012d44ccc82-kube-api-access-h5g6p\") pod \"machine-config-daemon-mf4d4\" (UID: \"52658507-b084-49cb-a694-f012d44ccc82\") " pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.235588 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqg2d\" (UniqueName: 
\"kubernetes.io/projected/150992a3-efc5-4dc2-a696-390ea843f8c4-kube-api-access-xqg2d\") pod \"ovnkube-node-pjxmr\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") " pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.236270 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmst\" (UniqueName: \"kubernetes.io/projected/d90afc7e-e255-4843-b19d-3ab9233e2024-kube-api-access-tlmst\") pod \"node-resolver-ld988\" (UID: \"d90afc7e-e255-4843-b19d-3ab9233e2024\") " pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.236913 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6j4f\" (UniqueName: \"kubernetes.io/projected/053be0da-d1f2-46d1-83b1-c9135f5c3c61-kube-api-access-l6j4f\") pod \"multus-additional-cni-plugins-rzvvg\" (UID: \"053be0da-d1f2-46d1-83b1-c9135f5c3c61\") " pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.240310 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dcsx\" (UniqueName: \"kubernetes.io/projected/dff1756e-0a6c-408f-9d31-c7cc88d1d970-kube-api-access-2dcsx\") pod \"node-ca-jcdp7\" (UID: \"dff1756e-0a6c-408f-9d31-c7cc88d1d970\") " pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.244500 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.245626 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g9mkp" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.254830 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:41:53 crc kubenswrapper[5113]: W1208 17:41:53.255898 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4621882_3d98_4910_9263_5959d2302427.slice/crio-7a1f0128a9f342dc9e9af14d8a58427f470a98de97ee1ebac3c2e4a0fd3bb557 WatchSource:0}: Error finding container 7a1f0128a9f342dc9e9af14d8a58427f470a98de97ee1ebac3c2e4a0fd3bb557: Status 404 returned error can't find the container with id 7a1f0128a9f342dc9e9af14d8a58427f470a98de97ee1ebac3c2e4a0fd3bb557 Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.257206 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 17:41:53 crc kubenswrapper[5113]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 17:41:53 crc kubenswrapper[5113]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mrhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g9mkp_openshift-multus(c4621882-3d98-4910-9263-5959d2302427): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.258513 5113 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g9mkp" podUID="c4621882-3d98-4910-9263-5959d2302427" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.259617 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-ld988" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.267608 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.272789 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.279929 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5g6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf4d4_openshift-machine-config-operator(52658507-b084-49cb-a694-f012d44ccc82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.283923 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 
--config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5g6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf4d4_openshift-machine-config-operator(52658507-b084-49cb-a694-f012d44ccc82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.285291 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.285781 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053be0da-d1f2-46d1-83b1-c9135f5c3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rzvvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.292211 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:53 crc kubenswrapper[5113]: set -uo pipefail Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 17:41:53 crc kubenswrapper[5113]: HOSTS_FILE="/etc/hosts" Dec 08 17:41:53 crc kubenswrapper[5113]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # Make a temporary file with the old hosts file's attributes. Dec 08 17:41:53 crc kubenswrapper[5113]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 17:41:53 crc kubenswrapper[5113]: echo "Failed to preserve hosts file. Exiting." Dec 08 17:41:53 crc kubenswrapper[5113]: exit 1 Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: while true; do Dec 08 17:41:53 crc kubenswrapper[5113]: declare -A svc_ips Dec 08 17:41:53 crc kubenswrapper[5113]: for svc in "${services[@]}"; do Dec 08 17:41:53 crc kubenswrapper[5113]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 17:41:53 crc kubenswrapper[5113]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 17:41:53 crc kubenswrapper[5113]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 17:41:53 crc kubenswrapper[5113]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 17:41:53 crc kubenswrapper[5113]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:53 crc kubenswrapper[5113]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:53 crc kubenswrapper[5113]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:53 crc kubenswrapper[5113]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 17:41:53 crc kubenswrapper[5113]: for i in ${!cmds[*]} Dec 08 17:41:53 crc kubenswrapper[5113]: do Dec 08 17:41:53 crc kubenswrapper[5113]: ips=($(eval "${cmds[i]}")) Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: svc_ips["${svc}"]="${ips[@]}" Dec 08 17:41:53 crc kubenswrapper[5113]: break Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # Update /etc/hosts only if we get valid service IPs Dec 08 17:41:53 crc kubenswrapper[5113]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 17:41:53 crc kubenswrapper[5113]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 17:41:53 crc kubenswrapper[5113]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 17:41:53 crc kubenswrapper[5113]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 17:41:53 crc kubenswrapper[5113]: sleep 60 & wait Dec 08 17:41:53 crc kubenswrapper[5113]: continue Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # Append resolver entries for services Dec 08 17:41:53 crc kubenswrapper[5113]: rc=0 Dec 08 17:41:53 crc kubenswrapper[5113]: for svc in "${!svc_ips[@]}"; do Dec 08 17:41:53 crc kubenswrapper[5113]: for ip in ${svc_ips[${svc}]}; do Dec 08 17:41:53 crc kubenswrapper[5113]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ $rc -ne 0 ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: sleep 60 & wait Dec 08 17:41:53 crc kubenswrapper[5113]: continue Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 17:41:53 crc kubenswrapper[5113]: # Replace /etc/hosts with our modified version if needed Dec 08 17:41:53 crc kubenswrapper[5113]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 17:41:53 crc kubenswrapper[5113]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: sleep 60 & wait Dec 08 17:41:53 crc kubenswrapper[5113]: unset svc_ips Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlmst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-ld988_openshift-dns(d90afc7e-e255-4843-b19d-3ab9233e2024): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.292276 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5113]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 17:41:53 crc kubenswrapper[5113]: apiVersion: v1 Dec 08 17:41:53 crc kubenswrapper[5113]: clusters: Dec 08 17:41:53 crc kubenswrapper[5113]: - cluster: Dec 08 17:41:53 crc kubenswrapper[5113]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 17:41:53 crc kubenswrapper[5113]: server: https://api-int.crc.testing:6443 Dec 08 17:41:53 crc kubenswrapper[5113]: name: default-cluster Dec 08 17:41:53 crc kubenswrapper[5113]: contexts: Dec 08 17:41:53 crc kubenswrapper[5113]: - context: Dec 08 17:41:53 crc kubenswrapper[5113]: cluster: default-cluster Dec 08 17:41:53 crc kubenswrapper[5113]: namespace: default Dec 08 17:41:53 crc kubenswrapper[5113]: user: default-auth Dec 08 17:41:53 crc kubenswrapper[5113]: name: default-context Dec 08 17:41:53 crc kubenswrapper[5113]: current-context: default-context Dec 08 17:41:53 crc kubenswrapper[5113]: kind: Config Dec 08 17:41:53 crc kubenswrapper[5113]: preferences: {} Dec 08 17:41:53 crc kubenswrapper[5113]: users: Dec 08 17:41:53 crc kubenswrapper[5113]: - name: default-auth Dec 08 17:41:53 crc kubenswrapper[5113]: user: Dec 08 17:41:53 crc kubenswrapper[5113]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:53 crc kubenswrapper[5113]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:53 crc kubenswrapper[5113]: EOF Dec 08 17:41:53 crc kubenswrapper[5113]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqg2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-pjxmr_openshift-ovn-kubernetes(150992a3-efc5-4dc2-a696-390ea843f8c4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.293749 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.293775 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-ld988" podUID="d90afc7e-e255-4843-b19d-3ab9233e2024" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.303295 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.303350 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.303367 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.303390 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.303408 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.303873 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.313597 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jcdp7" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.318080 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6j4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rzvvg_openshift-multus(053be0da-d1f2-46d1-83b1-c9135f5c3c61): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.319271 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" podUID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.321749 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31bd15ba-1b8e-4d1b-8529-0018d98eba91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5743acdcc3be9e6004ceda4b55d50dd3f70a0f644add23d30c9f195736b2f15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: W1208 17:41:53.323662 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff1756e_0a6c_408f_9d31_c7cc88d1d970.slice/crio-b439abaeb4b67f17e8473e4b45ec617ff99c03f01ad089a4732c66ac0ace7ae2 WatchSource:0}: Error finding container b439abaeb4b67f17e8473e4b45ec617ff99c03f01ad089a4732c66ac0ace7ae2: Status 404 returned error can't find the container with id b439abaeb4b67f17e8473e4b45ec617ff99c03f01ad089a4732c66ac0ace7ae2 Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.325162 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 17:41:53 crc kubenswrapper[5113]: while [ true ]; Dec 08 17:41:53 crc kubenswrapper[5113]: do Dec 08 17:41:53 crc kubenswrapper[5113]: for f in $(ls /tmp/serviceca); do Dec 08 17:41:53 crc kubenswrapper[5113]: echo $f Dec 08 17:41:53 crc kubenswrapper[5113]: ca_file_path="/tmp/serviceca/${f}" Dec 08 17:41:53 crc kubenswrapper[5113]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 17:41:53 crc kubenswrapper[5113]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 17:41:53 crc kubenswrapper[5113]: if [ -e "${reg_dir_path}" ]; then Dec 08 17:41:53 crc kubenswrapper[5113]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:53 crc kubenswrapper[5113]: else Dec 08 17:41:53 crc kubenswrapper[5113]: mkdir $reg_dir_path Dec 08 17:41:53 crc kubenswrapper[5113]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: for d in $(ls /etc/docker/certs.d); do Dec 08 17:41:53 crc kubenswrapper[5113]: echo $d Dec 08 17:41:53 crc kubenswrapper[5113]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 17:41:53 crc kubenswrapper[5113]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 17:41:53 crc kubenswrapper[5113]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 17:41:53 crc kubenswrapper[5113]: rm -rf /etc/docker/certs.d/$d Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: sleep 60 & wait ${!} Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dcsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-jcdp7_openshift-image-registry(dff1756e-0a6c-408f-9d31-c7cc88d1d970): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.326269 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jcdp7" podUID="dff1756e-0a6c-408f-9d31-c7cc88d1d970" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.326659 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.326699 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.326725 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.326774 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.326899 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.326921 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.326931 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.326979 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:54.326963863 +0000 UTC m=+80.042756979 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327034 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327062 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327073 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327106 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:54.327095866 +0000 UTC m=+80.042888982 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327194 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327248 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:54.327228459 +0000 UTC m=+80.043021575 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327281 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.327385 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:54.327363912 +0000 UTC m=+80.043157028 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.364199 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.404395 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.406218 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.406291 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.406317 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.406350 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.406376 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.428433 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.428611 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.429576 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.429618 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.430465 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.433884 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-k6xbq\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.447149 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-ld988" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d90afc7e-e255-4843-b19d-3ab9233e2024\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlmst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ld988\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.454603 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.454649 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.454663 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.454684 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.454701 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.466819 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.470100 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.470149 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.470162 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.470180 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.470193 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.480330 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.483586 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.483643 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.483657 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.483676 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.483691 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.491154 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150992a3-efc5-4dc2-a696-390ea843f8c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pjxmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.493155 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef157
6b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774
342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.496962 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.496994 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.497002 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.497016 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.497028 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.504336 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.507494 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.510582 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.510634 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.510651 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.510871 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.510902 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.517424 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:53 crc kubenswrapper[5113]: set -euo pipefail Dec 08 17:41:53 crc kubenswrapper[5113]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 17:41:53 crc kubenswrapper[5113]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 17:41:53 crc kubenswrapper[5113]: # As the secret mount is optional we must wait for the files to be present. Dec 08 17:41:53 crc kubenswrapper[5113]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 17:41:53 crc kubenswrapper[5113]: TS=$(date +%s) Dec 08 17:41:53 crc kubenswrapper[5113]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 17:41:53 crc kubenswrapper[5113]: HAS_LOGGED_INFO=0 Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: log_missing_certs(){ Dec 08 17:41:53 crc kubenswrapper[5113]: CUR_TS=$(date +%s) Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 17:41:53 crc kubenswrapper[5113]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 17:41:53 crc kubenswrapper[5113]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 17:41:53 crc kubenswrapper[5113]: HAS_LOGGED_INFO=1 Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: } Dec 08 17:41:53 crc kubenswrapper[5113]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 17:41:53 crc kubenswrapper[5113]: log_missing_certs Dec 08 17:41:53 crc kubenswrapper[5113]: sleep 5 Dec 08 17:41:53 crc kubenswrapper[5113]: done Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 17:41:53 crc kubenswrapper[5113]: exec /usr/bin/kube-rbac-proxy \ Dec 08 17:41:53 crc kubenswrapper[5113]: --logtostderr \ Dec 08 17:41:53 crc kubenswrapper[5113]: --secure-listen-address=:9108 \ Dec 08 17:41:53 crc kubenswrapper[5113]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 17:41:53 crc kubenswrapper[5113]: --upstream=http://127.0.0.1:29108/ \ Dec 08 17:41:53 crc kubenswrapper[5113]: --tls-private-key-file=${TLS_PK} \ Dec 08 17:41:53 crc kubenswrapper[5113]: --tls-cert-file=${TLS_CERT} Dec 08 17:41:53 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8h7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k6xbq_openshift-ovn-kubernetes(88405869-34c6-458b-ab82-663f9a965335): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.519705 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:53 crc kubenswrapper[5113]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: set -o allexport Dec 08 17:41:53 crc kubenswrapper[5113]: source "/env/_master" Dec 08 17:41:53 crc kubenswrapper[5113]: set +o allexport Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v4_join_subnet_opt= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v6_join_subnet_opt= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 
17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v4_transit_switch_subnet_opt= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v6_transit_switch_subnet_opt= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: dns_name_resolver_enabled_flag= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # This is needed so that converting clusters from GA to TP Dec 08 17:41:53 crc kubenswrapper[5113]: # will rollout control plane pods as well Dec 08 17:41:53 crc kubenswrapper[5113]: network_segmentation_enabled_flag= Dec 08 17:41:53 crc kubenswrapper[5113]: multi_network_enabled_flag= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "true" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "true" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "true" != "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: route_advertisements_enable_flag= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: preconfigured_udn_addresses_enable_flag= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 17:41:53 crc kubenswrapper[5113]: multi_network_policy_enabled_flag= Dec 08 17:41:53 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 17:41:53 crc kubenswrapper[5113]: admin_network_policy_enabled_flag= Dec 08 17:41:53 crc 
kubenswrapper[5113]: if [[ "true" == "true" ]]; then Dec 08 17:41:53 crc kubenswrapper[5113]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: if [ "shared" == "shared" ]; then Dec 08 17:41:53 crc kubenswrapper[5113]: gateway_mode_flags="--gateway-mode shared" Dec 08 17:41:53 crc kubenswrapper[5113]: elif [ "shared" == "local" ]; then Dec 08 17:41:53 crc kubenswrapper[5113]: gateway_mode_flags="--gateway-mode local" Dec 08 17:41:53 crc kubenswrapper[5113]: else Dec 08 17:41:53 crc kubenswrapper[5113]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 17:41:53 crc kubenswrapper[5113]: exit 1 Dec 08 17:41:53 crc kubenswrapper[5113]: fi Dec 08 17:41:53 crc kubenswrapper[5113]: Dec 08 17:41:53 crc kubenswrapper[5113]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 17:41:53 crc kubenswrapper[5113]: exec /usr/bin/ovnkube \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-interconnect \ Dec 08 17:41:53 crc kubenswrapper[5113]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 17:41:53 crc kubenswrapper[5113]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 17:41:53 crc kubenswrapper[5113]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 17:41:53 crc kubenswrapper[5113]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 17:41:53 crc kubenswrapper[5113]: --metrics-enable-pprof \ Dec 08 17:41:53 crc kubenswrapper[5113]: --metrics-enable-config-duration \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${ovn_v4_join_subnet_opt} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${ovn_v6_join_subnet_opt} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${dns_name_resolver_enabled_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${persistent_ips_enabled_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${multi_network_enabled_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${network_segmentation_enabled_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${gateway_mode_flags} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${route_advertisements_enable_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-egress-ip=true \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-egress-firewall=true \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-egress-qos=true \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-egress-service=true \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-multicast \ Dec 08 17:41:53 crc kubenswrapper[5113]: --enable-multi-external-gateway=true \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${multi_network_policy_enabled_flag} \ Dec 08 17:41:53 crc kubenswrapper[5113]: ${admin_network_policy_enabled_flag} Dec 08 17:41:53 crc kubenswrapper[5113]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8h7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k6xbq_openshift-ovn-kubernetes(88405869-34c6-458b-ab82-663f9a965335): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:53 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.520907 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" podUID="88405869-34c6-458b-ab82-663f9a965335" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.521270 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBy
tes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"
763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.521440 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.522563 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.522589 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.522598 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.522614 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.522624 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.522625 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee666ae-d4a8-4de9-9e11-93bcc0e98ef1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6553617f01079f332992f16cd1a257b9e090879e7f3081be7900e1e7d2ed55a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99c545c56fb91bdf227d9980b73132ba07deeb048b298061085b5ebee0385451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ff208ec35eb72507f5ed9a811469dc40ecc4ab248f6b69e4b2875d
bc4c2001b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.624585 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.624633 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.624642 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.624656 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.624666 5113 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.631247 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.631398 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.631498 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:54.631473762 +0000 UTC m=+80.347266908 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.631590 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: E1208 17:41:53.631679 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:54.631659116 +0000 UTC m=+80.347452232 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.727225 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.727270 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.727279 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.727293 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.727302 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.829774 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.829842 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.829858 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.829882 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.829897 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.932363 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.932427 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.932437 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.932504 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:53 crc kubenswrapper[5113]: I1208 17:41:53.932516 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:53Z","lastTransitionTime":"2025-12-08T17:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.034383 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.034435 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.034446 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.034466 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.034481 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.048260 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"bd5de27a5a3deaeb9d36de7de1ec65c20922929eeca215d274ef8ac2a9643bd2"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.049147 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g9mkp" event={"ID":"c4621882-3d98-4910-9263-5959d2302427","Type":"ContainerStarted","Data":"7a1f0128a9f342dc9e9af14d8a58427f470a98de97ee1ebac3c2e4a0fd3bb557"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.050286 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ld988" event={"ID":"d90afc7e-e255-4843-b19d-3ab9233e2024","Type":"ContainerStarted","Data":"96ab5e0e39ad0bd10b28d91190c39e879967db40983ab4f61199a1e4f5851d32"} Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.050597 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5113]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 17:41:54 crc kubenswrapper[5113]: apiVersion: v1 Dec 08 17:41:54 crc kubenswrapper[5113]: clusters: Dec 08 17:41:54 crc kubenswrapper[5113]: - cluster: Dec 08 17:41:54 crc kubenswrapper[5113]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 17:41:54 crc kubenswrapper[5113]: server: https://api-int.crc.testing:6443 Dec 08 17:41:54 crc kubenswrapper[5113]: name: default-cluster Dec 08 17:41:54 crc kubenswrapper[5113]: contexts: Dec 08 17:41:54 crc kubenswrapper[5113]: - context: Dec 08 17:41:54 crc kubenswrapper[5113]: cluster: default-cluster Dec 08 17:41:54 crc kubenswrapper[5113]: namespace: default Dec 08 17:41:54 crc kubenswrapper[5113]: user: default-auth Dec 08 17:41:54 crc kubenswrapper[5113]: name: default-context Dec 08 17:41:54 crc kubenswrapper[5113]: current-context: default-context Dec 08 17:41:54 crc kubenswrapper[5113]: kind: Config Dec 08 17:41:54 crc kubenswrapper[5113]: preferences: {} Dec 08 17:41:54 crc kubenswrapper[5113]: users: Dec 08 17:41:54 crc kubenswrapper[5113]: - name: default-auth Dec 08 17:41:54 crc kubenswrapper[5113]: user: Dec 08 17:41:54 crc kubenswrapper[5113]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:54 crc kubenswrapper[5113]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 17:41:54 crc kubenswrapper[5113]: EOF Dec 08 17:41:54 crc kubenswrapper[5113]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqg2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-pjxmr_openshift-ovn-kubernetes(150992a3-efc5-4dc2-a696-390ea843f8c4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.050981 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5113]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 17:41:54 crc kubenswrapper[5113]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 17:41:54 crc kubenswrapper[5113]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mrhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-g9mkp_openshift-multus(c4621882-3d98-4910-9263-5959d2302427): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.051271 5113 
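
Annotation: the CreateContainerConfigError seen here for ovnkube-node and multus (and repeated for several more pods below) is a startup-ordering symptom, not a pod-spec problem. Before starting any container the kubelet builds the legacy service environment variables (KUBERNETES_SERVICE_HOST and friends), and it refuses to do so until its Service informer has completed at least one list against the API server — which, on a node that is itself still bootstrapping, has not happened yet. The pods are not lost: pod_workers retries them on the next sync, and the errors stop once the informer syncs. Assuming the systemd unit is kubelet, as the "Starting Kubernetes Kubelet" line at the top of this log suggests, the tail-off is easy to watch:

    # The count stops growing once the kubelet has listed Services once.
    journalctl -b -u kubelet --no-pager \
      | grep -c 'services have not yet been read at least once'
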
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jcdp7" event={"ID":"dff1756e-0a6c-408f-9d31-c7cc88d1d970","Type":"ContainerStarted","Data":"b439abaeb4b67f17e8473e4b45ec617ff99c03f01ad089a4732c66ac0ace7ae2"} Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.051688 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.052212 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-g9mkp" podUID="c4621882-3d98-4910-9263-5959d2302427" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.052396 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5113]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:54 crc kubenswrapper[5113]: set -uo pipefail Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 17:41:54 crc kubenswrapper[5113]: HOSTS_FILE="/etc/hosts" Dec 08 17:41:54 crc kubenswrapper[5113]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # Make a temporary file with the old hosts file's attributes. Dec 08 17:41:54 crc kubenswrapper[5113]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 17:41:54 crc kubenswrapper[5113]: echo "Failed to preserve hosts file. Exiting." Dec 08 17:41:54 crc kubenswrapper[5113]: exit 1 Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: while true; do Dec 08 17:41:54 crc kubenswrapper[5113]: declare -A svc_ips Dec 08 17:41:54 crc kubenswrapper[5113]: for svc in "${services[@]}"; do Dec 08 17:41:54 crc kubenswrapper[5113]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 17:41:54 crc kubenswrapper[5113]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 17:41:54 crc kubenswrapper[5113]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 17:41:54 crc kubenswrapper[5113]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 08 17:41:54 crc kubenswrapper[5113]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:54 crc kubenswrapper[5113]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:54 crc kubenswrapper[5113]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 17:41:54 crc kubenswrapper[5113]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 17:41:54 crc kubenswrapper[5113]: for i in ${!cmds[*]} Dec 08 17:41:54 crc kubenswrapper[5113]: do Dec 08 17:41:54 crc kubenswrapper[5113]: ips=($(eval "${cmds[i]}")) Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: svc_ips["${svc}"]="${ips[@]}" Dec 08 17:41:54 crc kubenswrapper[5113]: break Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # Update /etc/hosts only if we get valid service IPs Dec 08 17:41:54 crc kubenswrapper[5113]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 17:41:54 crc kubenswrapper[5113]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 17:41:54 crc kubenswrapper[5113]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 17:41:54 crc kubenswrapper[5113]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 17:41:54 crc kubenswrapper[5113]: sleep 60 & wait Dec 08 17:41:54 crc kubenswrapper[5113]: continue Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # Append resolver entries for services Dec 08 17:41:54 crc kubenswrapper[5113]: rc=0 Dec 08 17:41:54 crc kubenswrapper[5113]: for svc in "${!svc_ips[@]}"; do Dec 08 17:41:54 crc kubenswrapper[5113]: for ip in ${svc_ips[${svc}]}; do Dec 08 17:41:54 crc kubenswrapper[5113]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ $rc -ne 0 ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: sleep 60 & wait Dec 08 17:41:54 crc kubenswrapper[5113]: continue Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 17:41:54 crc kubenswrapper[5113]: # Replace /etc/hosts with our modified version if needed Dec 08 17:41:54 crc kubenswrapper[5113]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 17:41:54 crc kubenswrapper[5113]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: sleep 60 & wait Dec 08 17:41:54 crc kubenswrapper[5113]: unset svc_ips Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlmst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-ld988_openshift-dns(d90afc7e-e255-4843-b19d-3ab9233e2024): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.052476 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5113]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 17:41:54 crc kubenswrapper[5113]: while [ true ]; Dec 08 17:41:54 crc kubenswrapper[5113]: do Dec 08 17:41:54 crc kubenswrapper[5113]: for f in $(ls /tmp/serviceca); do Dec 08 17:41:54 crc kubenswrapper[5113]: echo $f Dec 08 17:41:54 crc kubenswrapper[5113]: 
ca_file_path="/tmp/serviceca/${f}" Dec 08 17:41:54 crc kubenswrapper[5113]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 17:41:54 crc kubenswrapper[5113]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 17:41:54 crc kubenswrapper[5113]: if [ -e "${reg_dir_path}" ]; then Dec 08 17:41:54 crc kubenswrapper[5113]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:54 crc kubenswrapper[5113]: else Dec 08 17:41:54 crc kubenswrapper[5113]: mkdir $reg_dir_path Dec 08 17:41:54 crc kubenswrapper[5113]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: for d in $(ls /etc/docker/certs.d); do Dec 08 17:41:54 crc kubenswrapper[5113]: echo $d Dec 08 17:41:54 crc kubenswrapper[5113]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 17:41:54 crc kubenswrapper[5113]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 17:41:54 crc kubenswrapper[5113]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 17:41:54 crc kubenswrapper[5113]: rm -rf /etc/docker/certs.d/$d Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: sleep 60 & wait ${!} Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dcsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-jcdp7_openshift-image-registry(dff1756e-0a6c-408f-9d31-c7cc88d1d970): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.053175 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"90b580e0ef233365f12159b2e4362c9d98c7040ccded0eb838f645d84d3d73b1"} Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.053650 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-ld988" 
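
Annotation: the dns-node-resolver script quoted above edits a file it does not own, /etc/hosts, by tagging every line it writes with the "# openshift-generated-node-resolver" marker: each pass filters its own marked lines out with sed, appends fresh entries for the resolved service IPs, and replaces the file only when cmp reports a difference. The same idiom works anywhere a daemon shares a config file with other writers; a self-contained sketch of the pattern (marker, file path, and addresses are illustrative, not from the log):

    #!/bin/bash
    set -euo pipefail
    MARKER="managed-by-example"   # hypothetical marker
    HOSTS=/tmp/hosts.demo         # stand-in for /etc/hosts
    TMP=$(mktemp)
    printf '127.0.0.1 localhost\n10.0.0.5 db.internal # %s\n' "$MARKER" > "$HOSTS"
    # Rebuild: drop only our marked lines, keep everything else verbatim.
    sed "/# ${MARKER}/d" "$HOSTS" > "$TMP"
    # Re-append the current view of our managed entries.
    echo "10.0.0.7 db.internal # ${MARKER}" >> "$TMP"
    # Swap in the result only when it actually differs, as the logged script does.
    cmp -s "$TMP" "$HOSTS" || cp -f "$TMP" "$HOSTS"
    cat "$HOSTS"

The node-ca loop that follows it syncs service CA bundles into /etc/docker/certs.d, whose directory names may carry a port (for example registry:5000); ConfigMap keys cannot contain ":", so the mounted filenames use ".." in its place, and the two sed expressions in the log convert back and forth. A quick round-trip with an illustrative registry name:

    # ConfigMap keys may not contain ":", so ".." stands in for it on disk.
    f="registry.example.com..5000"                 # illustrative mounted key
    dir=$(echo "$f" | sed -r 's/(.*)\.\./\1:/')    # -> registry.example.com:5000
    key=$(echo "$dir" | sed -r 's/(.*):/\1\.\./')  # -> registry.example.com..5000
    echo "$dir $key"
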
podUID="d90afc7e-e255-4843-b19d-3ab9233e2024" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.053681 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jcdp7" podUID="dff1756e-0a6c-408f-9d31-c7cc88d1d970" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.054015 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.054361 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" event={"ID":"88405869-34c6-458b-ab82-663f9a965335","Type":"ContainerStarted","Data":"279975528b5dd4bc1a6e47833b55d0e841546efddd5d2db6ec25ff2c8fa4b017"} Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.055155 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.055884 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5113]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 17:41:54 crc kubenswrapper[5113]: set -euo pipefail Dec 08 17:41:54 crc kubenswrapper[5113]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 17:41:54 crc kubenswrapper[5113]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 17:41:54 crc kubenswrapper[5113]: # As the secret mount is optional we must wait for the files to be present. Dec 08 17:41:54 crc kubenswrapper[5113]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 17:41:54 crc kubenswrapper[5113]: TS=$(date +%s) Dec 08 17:41:54 crc kubenswrapper[5113]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 17:41:54 crc kubenswrapper[5113]: HAS_LOGGED_INFO=0 Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: log_missing_certs(){ Dec 08 17:41:54 crc kubenswrapper[5113]: CUR_TS=$(date +%s) Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 17:41:54 crc kubenswrapper[5113]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 17:41:54 crc kubenswrapper[5113]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 17:41:54 crc kubenswrapper[5113]: HAS_LOGGED_INFO=1 Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: } Dec 08 17:41:54 crc kubenswrapper[5113]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 17:41:54 crc kubenswrapper[5113]: log_missing_certs Dec 08 17:41:54 crc kubenswrapper[5113]: sleep 5 Dec 08 17:41:54 crc kubenswrapper[5113]: done Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 17:41:54 crc kubenswrapper[5113]: exec /usr/bin/kube-rbac-proxy \ Dec 08 17:41:54 crc kubenswrapper[5113]: --logtostderr \ Dec 08 17:41:54 crc kubenswrapper[5113]: --secure-listen-address=:9108 \ Dec 08 17:41:54 crc kubenswrapper[5113]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 17:41:54 crc kubenswrapper[5113]: --upstream=http://127.0.0.1:29108/ \ Dec 08 17:41:54 crc kubenswrapper[5113]: --tls-private-key-file=${TLS_PK} \ Dec 08 17:41:54 crc kubenswrapper[5113]: --tls-cert-file=${TLS_CERT} Dec 08 17:41:54 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8h7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k6xbq_openshift-ovn-kubernetes(88405869-34c6-458b-ab82-663f9a965335): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.055921 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerStarted","Data":"895e1859cf3fed8fc4143b2986aaaac6301f6f80c4fba043c4e633ff4a4dfc5a"} Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.057195 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6j4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rzvvg_openshift-multus(053be0da-d1f2-46d1-83b1-c9135f5c3c61): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.057567 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
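
Annotation: the kube-rbac-proxy wrapper for ovnkube-control-plane (quoted just above) handles an optional secret mount by polling for the TLS key/cert pair before exec'ing the proxy, logging INFO once and escalating to WARN after 20 minutes. One bash subtlety keeps it correct: in `[[ "${CUR_TS}" -gt "WARN_TS" ]]` the right-hand operand has no `$`, yet the comparison still works, because `-gt` inside `[[ ]]` evaluates both sides as arithmetic expressions, where a bare name is dereferenced as a variable. A two-line demonstration:

    WARN_TS=100; CUR_TS=150
    # Bare WARN_TS is arithmetically expanded to 100 inside [[ ... ]].
    [[ "${CUR_TS}" -gt "WARN_TS" ]] && echo "dereferenced: 150 > 100"
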
pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"8e9714c276c0197523bafc7767ca145a94ec405b87b63634e3783dd08b8efaaa"} Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.058291 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" podUID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.058330 5113 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 17:41:54 crc kubenswrapper[5113]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ -f "/env/_master" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: set -o allexport Dec 08 17:41:54 crc kubenswrapper[5113]: source "/env/_master" Dec 08 17:41:54 crc kubenswrapper[5113]: set +o allexport Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v4_join_subnet_opt= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v6_join_subnet_opt= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v4_transit_switch_subnet_opt= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v6_transit_switch_subnet_opt= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "" != "" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: dns_name_resolver_enabled_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # This is needed so that converting clusters from GA to TP Dec 08 17:41:54 crc kubenswrapper[5113]: # will rollout control plane pods as well Dec 08 17:41:54 crc kubenswrapper[5113]: network_segmentation_enabled_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: multi_network_enabled_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "true" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:54 crc kubenswrapper[5113]: 
fi Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "true" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "true" != "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: multi_network_enabled_flag="--enable-multi-network" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: route_advertisements_enable_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: preconfigured_udn_addresses_enable_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 17:41:54 crc kubenswrapper[5113]: multi_network_policy_enabled_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "false" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 17:41:54 crc kubenswrapper[5113]: admin_network_policy_enabled_flag= Dec 08 17:41:54 crc kubenswrapper[5113]: if [[ "true" == "true" ]]; then Dec 08 17:41:54 crc kubenswrapper[5113]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: if [ "shared" == "shared" ]; then Dec 08 17:41:54 crc kubenswrapper[5113]: gateway_mode_flags="--gateway-mode shared" Dec 08 17:41:54 crc kubenswrapper[5113]: elif [ "shared" == "local" ]; then Dec 08 17:41:54 crc kubenswrapper[5113]: gateway_mode_flags="--gateway-mode local" Dec 08 17:41:54 crc kubenswrapper[5113]: else Dec 08 17:41:54 crc kubenswrapper[5113]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 08 17:41:54 crc kubenswrapper[5113]: exit 1 Dec 08 17:41:54 crc kubenswrapper[5113]: fi Dec 08 17:41:54 crc kubenswrapper[5113]: Dec 08 17:41:54 crc kubenswrapper[5113]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 17:41:54 crc kubenswrapper[5113]: exec /usr/bin/ovnkube \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-interconnect \ Dec 08 17:41:54 crc kubenswrapper[5113]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 17:41:54 crc kubenswrapper[5113]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 17:41:54 crc kubenswrapper[5113]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 17:41:54 crc kubenswrapper[5113]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 17:41:54 crc kubenswrapper[5113]: --metrics-enable-pprof \ Dec 08 17:41:54 crc kubenswrapper[5113]: --metrics-enable-config-duration \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${ovn_v4_join_subnet_opt} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${ovn_v6_join_subnet_opt} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${dns_name_resolver_enabled_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${persistent_ips_enabled_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${multi_network_enabled_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${network_segmentation_enabled_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${gateway_mode_flags} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${route_advertisements_enable_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-egress-ip=true \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-egress-firewall=true \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-egress-qos=true \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-egress-service=true \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-multicast \ Dec 08 17:41:54 crc kubenswrapper[5113]: --enable-multi-external-gateway=true \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${multi_network_policy_enabled_flag} \ Dec 08 17:41:54 crc kubenswrapper[5113]: ${admin_network_policy_enabled_flag} Dec 08 17:41:54 crc kubenswrapper[5113]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8h7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-k6xbq_openshift-ovn-kubernetes(88405869-34c6-458b-ab82-663f9a965335): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 17:41:54 crc kubenswrapper[5113]: > logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.059019 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5g6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf4d4_openshift-machine-config-operator(52658507-b084-49cb-a694-f012d44ccc82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:54 crc 
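
Annotation: the ovnkube-cluster-manager command above arrives at the kubelet already rendered — conditionals such as `if [[ "true" == "true" ]]` and `if [[ "" != "" ]]` had their left-hand values substituted by the network operator at deploy time, so the script encodes this cluster's configuration (multi-network, network segmentation, and admin network policy on; route advertisements, DNS name resolver, and the join/transit-switch subnet overrides off; shared gateway mode). Evaluating those literals by hand reduces the logged script to roughly the invocation below — a reading aid reconstructed from the quoted conditionals, not operator output:

    exec /usr/bin/ovnkube \
      --enable-interconnect \
      --init-cluster-manager "${K8S_NODE}" \
      --config-file=/run/ovnkube-config/ovnkube.conf \
      --loglevel "${OVN_KUBE_LOG_LEVEL}" \
      --metrics-bind-address "127.0.0.1:29108" \
      --metrics-enable-pprof \
      --metrics-enable-config-duration \
      --enable-persistent-ips \
      --enable-multi-network \
      --enable-network-segmentation \
      --gateway-mode shared \
      --enable-egress-ip=true \
      --enable-egress-firewall=true \
      --enable-egress-qos=true \
      --enable-egress-service=true \
      --enable-multicast \
      --enable-multi-external-gateway=true \
      --enable-admin-network-policy
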
kubenswrapper[5113]: E1208 17:41:54.060486 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" podUID="88405869-34c6-458b-ab82-663f9a965335" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.061181 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0a3643f-fbed-4614-a9cb-87b71148c273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bc5j2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.061616 5113 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5g6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-mf4d4_openshift-machine-config-operator(52658507-b084-49cb-a694-f012d44ccc82): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.062875 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.070066 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52658507-b084-49cb-a694-f012d44ccc82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf4d4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.079556 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.089097 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.100007 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g9mkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4621882-3d98-4910-9263-5959d2302427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g9mkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.107675 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcdp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dff1756e-0a6c-408f-9d31-c7cc88d1d970\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dcsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.129734 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc63f51f-8fdb-44a8-bdff-ec60915754d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e1ec5e3621120e1d45d214b07ea9461d74b8876f2ecb753c9cb64edceb6e9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://32d2671acebdb7c9bf493978733f31bdc688b2c39538d6accbde1f8acb545ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a976ae78e4000d634904dacd9850a4ef4b1a8f8466096b6d6a1a81bb1509d028\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d08d6cf52478608ef265a49b4a56ce194ac8e56196751c94c2a0d8811c6fd23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://897669035e32774ca5030c245e526d1f4a891d11bf807b707598ca43dba686f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e363
1da0954fc97e4074ffade7651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.136630 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.136671 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.136682 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.136705 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.136719 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.147159 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.158615 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.171234 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053be0da-d1f2-46d1-83b1-c9135f5c3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rzvvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.180684 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31bd15ba-1b8e-4d1b-8529-0018d98eba91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5743acdcc3be9e6004ceda4b55d50dd3f70a0f644add23d30c9f195736b2f15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.192292 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288a1135-0a6c-4b14-ac02-838923e33cfa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:45Z\\\",\\\"message\\\":\\\" 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3570322734/tls.crt::/tmp/serving-cert-3570322734/tls.key\\\\\\\"\\\\nI1208 17:41:45.041499 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:45.046325 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:45.046355 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:45.046402 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:45.046407 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:45.050153 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 17:41:45.050188 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1208 17:41:45.050200 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:45.050202 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:45.050206 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 17:41:45.050221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 17:41:45.053146 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF1208 17:41:45.053175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:45.053214 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.203059 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.213003 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.220360 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-ld988" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d90afc7e-e255-4843-b19d-3ab9233e2024\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlmst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ld988\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.234540 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150992a3-efc5-4dc2-a696-390ea843f8c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pjxmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.238935 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.238988 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.239002 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.239022 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.239057 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.246837 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4e830a7-649e-4ac5-a163-288b8fcc87f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://df4ea2631d89aee7ea27154e97131c2deb0604c986234fa00b81adc1f68380f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedRes
ources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9a8565c2d48a5158ee81ea32bf94e1fa5918bd8ef77a2f4f13837dfdac8e5bc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01f1601b1902e4cd2d97dad1feb305cf7fabfb3963c75af1790b8a70b3f36673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.260706 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee666ae-d4a8-4de9-9e11-93bcc0e98ef1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6553617f01079f332992f16cd1a257b9e090879e7f3081be7900e1e7d2ed55a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99c545c56fb91bdf227d9980b73132ba07deeb048b298061085b5ebee0385451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ff208ec35eb72507f5ed9a811469dc40ecc4ab248f6b69e4b2875dbc4c2001b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.285289 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.325654 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4e830a7-649e-4ac5-a163-288b8fcc87f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://df4ea2631d89aee7ea27154e97131c2deb0604c986234fa00b81adc1f68380f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9a8565c2d48a5158ee81ea32bf94e1fa5918bd8ef77a2f4f13837dfdac8e5bc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01f1601b1902e4cd2d97dad1feb305cf7fabfb3963c75af1790b8a70b3f36673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.341380 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.341432 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.341455 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.341470 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.341482 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.344904 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.344982 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.345067 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.345108 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345122 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345159 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345209 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345223 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:54 
crc kubenswrapper[5113]: E1208 17:41:54.345183 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:56.345164991 +0000 UTC m=+82.060958107 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345336 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345390 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:56.345297614 +0000 UTC m=+82.061090730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345384 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.345409 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:56.345399196 +0000 UTC m=+82.061192312 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.346239 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.346264 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.346312 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:56.346300976 +0000 UTC m=+82.062094092 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.365206 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee666ae-d4a8-4de9-9e11-93bcc0e98ef1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6553617f01079f332992f16cd1a257b9e090879e7f3081be7900e1e7d2ed55a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99c545c56fb91bdf227d9980b73132ba07deeb048b298061085b5ebee0385451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ff208ec35eb72507f5ed9a811469dc40ecc4ab248f6b69e4b2875dbc4c2001b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.408333 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.443925 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0a3643f-fbed-4614-a9cb-87b71148c273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bc5j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.444149 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.444218 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.444236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.444260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.444277 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.482391 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52658507-b084-49cb-a694-f012d44ccc82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf4d4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.525825 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.547474 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.547539 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.547551 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.547570 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.547582 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.563665 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.606227 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g9mkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4621882-3d98-4910-9263-5959d2302427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g9mkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.643158 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dff1756e-0a6c-408f-9d31-c7cc88d1d970\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dcsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.647569 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.647683 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:41:56.647657224 +0000 UTC m=+82.363450340 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.647833 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.648027 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.648104 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:41:56.648096383 +0000 UTC m=+82.363889499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.650914 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.650997 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.651056 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.651078 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.651091 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.679702 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.679749 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.679787 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.679888 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.680073 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.680160 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.680228 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:41:54 crc kubenswrapper[5113]: E1208 17:41:54.680377 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.684857 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.685727 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.687333 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.689263 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.691261 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.692828 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc63f51f-8fdb-44a8-bdff-ec60915754d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e1ec5e3621120e1d45d214b07ea9461d74b8876f2ecb753c9cb64edceb6e9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://32d2671acebdb7c9bf493978733f31bdc688b2c39538d6accbde1f8acb545ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a976ae78e4000d634904dacd9850a4ef4b1a8f8466096b6d6a1a81bb1509d028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d08d6cf52478608ef265a49b4a56ce194ac8e56196751c94c2a0d8811c6fd23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://897669035e32774ca5030c245e526d1f4a891d11bf807b707598ca43dba686f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.694189 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.695759 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.696844 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.698100 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.699295 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.700876 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.702376 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.704931 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.709672 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.711633 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.712461 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.714054 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.719628 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.720517 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.721765 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.722934 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.726166 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.732901 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.733735 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.735081 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.735847 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.737367 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.752775 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.753276 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.753289 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.753329 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.753340 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.764962 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.773171 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.774358 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.777989 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.779322 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.780839 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.782688 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.784832 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.787063 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.788861 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.789589 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.790499 5113 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.790599 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.793331 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.794258 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.795645 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.797836 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.798416 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.799473 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.800929 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.801534 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.802929 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.803982 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.805896 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.806608 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053be0da-d1f2-46d1-83b1-c9135f5c3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rzvvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.806961 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.808140 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.808778 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.810057 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.811185 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.812991 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.813863 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.815253 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.816051 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes"
Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.843628 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31bd15ba-1b8e-4d1b-8529-0018d98eba91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5743acdcc3be9e6004ceda4b55d50dd3f70a0f644add23d30c9f195736b2f15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":tr
ue,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.855414 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.855444 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.855453 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.855467 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.855479 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.886845 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288a1135-0a6c-4b14-ac02-838923e33cfa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:45Z\\\",\\\"message\\\":\\\" 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3570322734/tls.crt::/tmp/serving-cert-3570322734/tls.key\\\\\\\"\\\\nI1208 17:41:45.041499 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:45.046325 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:45.046355 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:45.046402 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:45.046407 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:45.050153 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 17:41:45.050188 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:45.050200 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:45.050202 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 
17:41:45.050206 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 17:41:45.050221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 17:41:45.053146 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF1208 17:41:45.053175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:45.053214 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\
\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.924607 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.957316 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.957363 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.957374 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.957390 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.957404 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:54Z","lastTransitionTime":"2025-12-08T17:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:54 crc kubenswrapper[5113]: I1208 17:41:54.968082 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.002699 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-ld988" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d90afc7e-e255-4843-b19d-3ab9233e2024\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlmst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ld988\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.010821 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.011634 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:41:55 crc kubenswrapper[5113]: E1208 17:41:55.011919 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.048696 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150992a3-efc5-4dc2-a696-390ea843f8c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pjxmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.059668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.059728 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5113]: 
I1208 17:41:55.059741 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.059763 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.059776 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.083827 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g9mkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4621882-3d98-4910-9263-5959d2302427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g9mkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.124314 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dff1756e-0a6c-408f-9d31-c7cc88d1d970\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dcsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.163869 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.163962 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.163994 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.164026 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.164095 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.185359 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc63f51f-8fdb-44a8-bdff-ec60915754d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e1ec5e3621120e1d45d214b07ea9461d74b8876f2ecb753c9cb64edceb6e9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://32d2671acebdb7c9bf493978733f31bdc688b2c39538d6accbde1f8acb545ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a976ae78e4000d634904dacd9850a4ef4b1a8f8466096b6d6a1a81bb1509d028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d08d6cf52478608ef265a49b4a56ce194ac8e56196751c94c2a0d8811c6fd23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://897669035e32774ca5030c245e526d1f4a891d11bf807b707598ca43dba686f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.209234 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.247996 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.266542 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.266804 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.266926 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.267119 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.267278 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.290268 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053be0da-d1f2-46d1-83b1-c9135f5c3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rzvvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.326072 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31bd15ba-1b8e-4d1b-8529-0018d98eba91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5743acdcc3be9e6004ceda4b55d50dd3f70a0f644add23d30c9f195736b2f15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.369766 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.369847 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.369867 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.369936 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.369994 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.371628 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288a1135-0a6c-4b14-ac02-838923e33cfa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:45Z\\\",\\\"message\\\":\\\" 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3570322734/tls.crt::/tmp/serving-cert-3570322734/tls.key\\\\\\\"\\\\nI1208 17:41:45.041499 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:45.046325 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:45.046355 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:45.046402 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:45.046407 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:45.050153 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 17:41:45.050188 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:45.050200 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:45.050202 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:45.050206 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 17:41:45.050221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 17:41:45.053146 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF1208 17:41:45.053175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:45.053214 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.402981 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.443883 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.473202 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.473247 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.473257 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.473273 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.473287 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.484350 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-ld988" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d90afc7e-e255-4843-b19d-3ab9233e2024\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlmst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ld988\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.539389 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"150992a3-efc5-4dc2-a696-390ea843f8c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pjxmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.566758 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4e830a7-649e-4ac5-a163-288b8fcc87f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://df4ea2631d89aee7ea27154e97131c2deb0604c986234fa00b81adc1f68380f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5
a92681837b78707d12f84be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9a8565c2d48a5158ee81ea32bf94e1fa5918bd8ef77a2f4f13837dfdac8e5bc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01f1601b1902e4cd2d97dad1feb305cf7fabfb3963c75af1790b8a70b3f36673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.579289 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.579361 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.579388 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.579422 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.579446 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.604951 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee666ae-d4a8-4de9-9e11-93bcc0e98ef1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6553617f01079f332992f16cd1a257b9e090879e7f3081be7900e1e7d2ed55a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99c545c56fb91bdf227d9980b73132ba07deeb048b298061085b5ebee0385451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ff208ec35eb72507f5ed9a811469dc40ecc4ab248f6b69e4b2875dbc4c2001b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.645070 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.681620 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.681685 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.681750 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.681769 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.681782 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.684031 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0a3643f-fbed-4614-a9cb-87b71148c273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bc5j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.725646 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52658507-b084-49cb-a694-f012d44ccc82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf4d4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.764873 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.783674 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.783724 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.783739 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.783755 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.783766 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.802881 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.886075 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.886127 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.886146 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.886166 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.886179 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.988182 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.988232 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.988245 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.988262 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:55 crc kubenswrapper[5113]: I1208 17:41:55.988273 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:55Z","lastTransitionTime":"2025-12-08T17:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.090494 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.090556 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.090574 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.090598 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.090616 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.193689 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.193740 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.193750 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.193766 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.193780 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.296622 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.296674 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.296686 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.296703 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.296714 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.367307 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.367356 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367478 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367492 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367491 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.367513 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367524 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.367541 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367545 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367670 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367695 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:00.367676853 +0000 UTC m=+86.083469969 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367697 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367503 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.367735 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:00.367713274 +0000 UTC m=+86.083506420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.368013 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:00.367974329 +0000 UTC m=+86.083767485 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.368104 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:00.368089982 +0000 UTC m=+86.083883138 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.398776 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.398836 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.398846 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.398870 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.398881 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.501239 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.501295 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.501312 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.501334 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.501350 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.607840 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.607900 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.607915 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.607939 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.607955 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.671940 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.672142 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.672201 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:00.672168111 +0000 UTC m=+86.387961227 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.672289 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.672388 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:00.672364845 +0000 UTC m=+86.388157991 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.679457 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.679527 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.679490 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.679468 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.679660 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.679831 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.679990 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:41:56 crc kubenswrapper[5113]: E1208 17:41:56.680158 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.710717 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.710774 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.710787 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.710806 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.710820 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.813469 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.813527 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.813538 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.813558 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.813569 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.915818 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.915902 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.915940 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.915958 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:56 crc kubenswrapper[5113]: I1208 17:41:56.915967 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:56Z","lastTransitionTime":"2025-12-08T17:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.018655 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.018724 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.018736 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.018752 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.018763 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.120393 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.120459 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.120471 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.120487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.120526 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.223022 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.223099 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.223108 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.223124 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.223133 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.325422 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.325508 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.325536 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.325572 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.325595 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.428774 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.429153 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.429238 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.429436 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.429505 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.531872 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.531968 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.531979 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.531996 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.532007 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.634430 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.634491 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.634501 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.634516 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.634525 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.737584 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.737639 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.737653 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.737668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.737676 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.839996 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.840079 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.840097 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.840117 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.840134 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.943320 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.943406 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.943424 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.943450 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:57 crc kubenswrapper[5113]: I1208 17:41:57.943463 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:57Z","lastTransitionTime":"2025-12-08T17:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.045983 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.046092 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.046109 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.046128 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.046142 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.148980 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.149065 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.149080 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.149100 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.149111 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.251500 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.251578 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.251598 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.251624 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.251652 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.353391 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.353444 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.353457 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.353474 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.353485 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.455638 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.456092 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.456242 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.456361 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.456453 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.558998 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.559073 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.559083 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.559100 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.559110 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.661784 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.661830 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.661840 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.661855 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.661864 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:41:58Z","lastTransitionTime":"2025-12-08T17:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
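This status cycle repeats roughly every 100 ms from 17:41:56 through 17:42:01 with only the timestamps advancing: the kubelet keeps publishing a Ready=False condition until a CNI configuration appears. The condition it writes is the same object any client can read back from the apiserver. Below is a minimal client-go sketch, assuming a kubeconfig at the default path and the node name "crc"; it is an illustration, not code from the logged cluster.

    // nodecheck.go: print the Ready condition the kubelet is publishing above.
    // Assumes ~/.kube/config and a node named "crc" (both assumptions).
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			// Mirrors the condition={"type":"Ready",...} payload in the log.
    			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
    		}
    	}
    }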
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.679860 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.679949 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.679911 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:41:58 crc kubenswrapper[5113]: I1208 17:41:58.679872 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:41:58 crc kubenswrapper[5113]: E1208 17:41:58.680207 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:41:58 crc kubenswrapper[5113]: E1208 17:41:58.680297 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:41:58 crc kubenswrapper[5113]: E1208 17:41:58.680649 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:41:58 crc kubenswrapper[5113]: E1208 17:41:58.680882 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.027173 5113 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
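The reflector.go line marks the kubelet's *v1.Service informer completing its initial LIST against the apiserver and filling its local cache. The same list-watch machinery is available from client-go; a minimal sketch of that pattern, assuming a reachable kubeconfig at the default location:

    // cachesync.go: sketch of the client-go list-watch pattern behind the
    // reflector.go "Caches populated" line (here for *v1.Service).
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactory(cs, 30*time.Minute)
    	svcInformer := factory.Core().V1().Services().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// Blocks until the reflector's initial LIST lands in the local store --
    	// the moment the kubelet logs "Caches populated".
    	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
    		panic("service cache never synced")
    	}
    	fmt.Println("service cache populated")
    }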
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.415186 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.415291 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.415352 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415385 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415422 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.415427 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415445 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415538 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:08.415512971 +0000 UTC m=+94.131306117 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415658 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415746 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:08.415720706 +0000 UTC m=+94.131513862 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415878 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415904 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415925 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.415990 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:08.415968471 +0000 UTC m=+94.131761627 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.416108 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.416190 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:08.416168616 +0000 UTC m=+94.131961772 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
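Each failed MountVolume/UnmountVolume attempt is parked by nestedpendingoperations with an exponentially growing delay, here already at 8 s (durationBeforeRetry 8s, retry scheduled for 17:42:08). The apimachinery wait helpers give the same shape; a sketch follows, where mountVolume is a hypothetical stand-in for the failing SetUp call:

    // volretry.go: sketch of the backoff behind "No retries permitted
    // until ... (durationBeforeRetry 8s)". The delay doubles per failure;
    // 0.5s, 1s, 2s, 4s, 8s under these illustrative parameters.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func mountVolume() error {
    	// Stand-in for MountVolume.SetUp; fails while the referenced
    	// ConfigMap/Secret is still "not registered" in the kubelet's cache.
    	return errors.New(`object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered`)
    }

    func main() {
    	backoff := wait.Backoff{
    		Duration: 500 * time.Millisecond, // initial delay (assumed for the sketch)
    		Factor:   2.0,                    // exponential growth
    		Steps:    5,
    	}
    	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
    		if err := mountVolume(); err != nil {
    			fmt.Println("retrying after error:", err)
    			return false, nil // not done; back off and try again
    		}
    		return true, nil
    	})
    	if err != nil {
    		fmt.Println("gave up:", err)
    	}
    }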
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.718576 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.718749 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:08.718693941 +0000 UTC m=+94.434487097 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:00 crc kubenswrapper[5113]: I1208 17:42:00.718991 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.719221 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 17:42:00 crc kubenswrapper[5113]: E1208 17:42:00.719330 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:08.719304995 +0000 UTC m=+94.435098151 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered
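The TearDown failure is different in kind from the "not registered" object errors: the kubelet's CSI plugin registry simply does not contain kubevirt.io.hostpath-provisioner yet, which is typical right after a kubelet restart (this log is at m=+94s) until the driver's node plugin re-registers over its registration socket. The registered set is mirrored on the node's CSINode object; a sketch that lists it, assuming the default kubeconfig and node "crc":

    // csidrivers.go: list the CSI drivers registered on a node -- the set
    // the "driver name kubevirt.io.hostpath-provisioner not found" error
    // refers to. Assumes ~/.kube/config and a node named "crc".
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// CSINode mirrors the kubelet-side plugin registry; the missing
    	// driver shows up here once its node plugin re-registers.
    	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range csiNode.Spec.Drivers {
    		fmt.Println("registered CSI driver:", d.Name)
    	}
    }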
Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.644801 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.644860 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.644872 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.644890 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.644904 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.667059 5113 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.748432 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.748493 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.748508 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.748529 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.748545 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.850896 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.850974 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.850993 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.851022 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.851064 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
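The "Caches populated" type="*v1.CSIDriver" line buried in the heartbeat repetition above is an early sign of recovery: the kubelet's reflectors are completing their initial list/watch against the API server, the same machinery the "not registered" secret mount earlier is waiting on. A sketch of that pattern using client-go's shared informers (again assuming the default kubeconfig; this is not the kubelet's own wiring):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Start a shared informer factory and block until the CSIDriver cache
	// has done its initial sync -- the moment the log calls "Caches populated".
	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	csiInformer := factory.Storage().V1().CSIDrivers().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	if !cache.WaitForCacheSync(stop, csiInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("CSIDriver cache populated")
}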
Has your network provider started?"} Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.953710 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.954081 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.954227 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.954364 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:01 crc kubenswrapper[5113]: I1208 17:42:01.954448 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:01Z","lastTransitionTime":"2025-12-08T17:42:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.056896 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.056972 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.056991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.057013 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.057027 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.160306 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.160363 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.160376 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.160397 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.160407 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.263239 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.263294 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.263304 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.263320 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.263329 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.366982 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.367113 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.367142 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.367178 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.367203 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.470024 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.470109 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.470126 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.470148 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.470159 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
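Every one of these heartbeats carries the same root cause: the container runtime reports NetworkReady=false because nothing has written a CNI network config into /etc/kubernetes/cni/net.d/ yet, and the kubelet keeps the node's Ready condition False until one appears. On OpenShift that config is dropped by the network operator's pods (Multus/OVN-Kubernetes), so the condition normally clears on its own once they run. A rough poll of the same directory, loosely modeled on libcni's config scan (the extensions are assumed to be the standard .conf/.conflist/.json):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// The directory named in the kubelet's NetworkReady error.
	dir := "/etc/kubernetes/cni/net.d"
	for {
		var matches []string
		for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
			m, _ := filepath.Glob(filepath.Join(dir, pattern))
			matches = append(matches, m...)
		}
		if len(matches) > 0 {
			fmt.Println("CNI config present:", matches)
			return
		}
		fmt.Fprintln(os.Stderr, "no CNI config yet, retrying...")
		time.Sleep(2 * time.Second)
	}
}

Running this on the node only mirrors what the runtime is checking; the fix is to get the network operator pods started, not to hand-write a config.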
Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.572769 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.572835 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.572853 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.572880 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.572900 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.676423 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.676487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.676503 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.676524 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.676537 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.679875 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:02 crc kubenswrapper[5113]: E1208 17:42:02.680020 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.680390 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.680402 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.680639 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:02 crc kubenswrapper[5113]: E1208 17:42:02.680639 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:02 crc kubenswrapper[5113]: E1208 17:42:02.680805 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:02 crc kubenswrapper[5113]: E1208 17:42:02.680994 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.778399 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.778458 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.778471 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.778490 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.778502 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
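With the runtime network not ready, the kubelet refuses to create sandboxes for pods attached to the cluster network, so the four pods above (network-check-target, networking-console-plugin, network-check-source, network-metrics-daemon) are parked with "Error syncing pod, skipping". Host-network pods, including the static control-plane pods, bypass this gate, which is why only these four appear. A minimal sketch of the gating rule (deliberately simplified; not the kubelet's actual code path):

package main

import "fmt"

type pod struct {
	name        string
	hostNetwork bool
}

// canStartSandbox models the check the log shows: sandbox creation is
// deferred for pods on the cluster network while NetworkReady=false,
// while host-network pods pass through.
func canStartSandbox(p pod, networkReady bool) error {
	if !p.hostNetwork && !networkReady {
		return fmt.Errorf("network is not ready: NetworkReady=false")
	}
	return nil
}

func main() {
	pods := []pod{
		{"openshift-multus/network-metrics-daemon-bc5j2", false},
		{"some-host-network-static-pod", true}, // hypothetical host-network pod
	}
	for _, p := range pods {
		if err := canStartSandbox(p, false); err != nil {
			fmt.Printf("skip %s: %v\n", p.name, err)
		} else {
			fmt.Printf("start %s\n", p.name)
		}
	}
}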
Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.881279 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.881343 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.881362 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.881384 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.881399 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.984288 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.984363 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.984383 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.984410 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:02 crc kubenswrapper[5113]: I1208 17:42:02.984434 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:02Z","lastTransitionTime":"2025-12-08T17:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.087485 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.087552 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.087568 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.087590 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.087605 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.191023 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.191113 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.191137 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.191164 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.191184 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.294761 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.294836 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.294855 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.294882 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.294901 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.399469 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.399541 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.399555 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.399574 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.399588 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.501948 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.502005 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.502016 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.502054 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.502067 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.604198 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.604265 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.604278 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.604299 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.604315 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.706326 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.706389 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.706401 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.706423 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.706435 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.714443 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.714495 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.714509 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.714527 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.714539 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: E1208 17:42:03.726612 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.732343 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.732391 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.732403 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.732419 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.732429 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: E1208 17:42:03.743882 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.748311 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.748349 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.748359 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.748377 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.748389 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: E1208 17:42:03.759026 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400452Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861252Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T17:42:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"80c2bcad-2593-4a10-ab9b-2aa8b813a421\\\",\\\"systemUUID\\\":\\\"763bf7f3-a73d-446d-8674-09d6015bdd0a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.763598 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.763649 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.763662 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.763683 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.763697 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.777164 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.777233 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.777253 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.777290 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.777307 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: E1208 17:42:03.789271 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.808486 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.808542 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.808555 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.808575 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.808590 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.911591 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.911676 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.911694 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.911719 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:03 crc kubenswrapper[5113]: I1208 17:42:03.911744 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:03Z","lastTransitionTime":"2025-12-08T17:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.014312 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.014367 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.014377 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.014393 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.014402 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.117067 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.117120 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.117132 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.117148 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.117162 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.219611 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.219657 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.219666 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.219680 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.219688 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.322281 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.322334 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.322347 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.322365 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.322377 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.425322 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.425391 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.425405 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.425427 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.425443 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.527837 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.527884 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.527894 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.527907 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.527916 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.630538 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.630611 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.630628 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.630651 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.630666 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.679700 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:04 crc kubenswrapper[5113]: E1208 17:42:04.679916 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.679994 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:04 crc kubenswrapper[5113]: E1208 17:42:04.680229 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.680298 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.680365 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:04 crc kubenswrapper[5113]: E1208 17:42:04.680606 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:04 crc kubenswrapper[5113]: E1208 17:42:04.680789 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.696208 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.708837 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88405869-34c6-458b-ab82-663f9a965335\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8h7r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-k6xbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.721122 5113 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g9mkp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4621882-3d98-4910-9263-5959d2302427\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrhv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g9mkp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.728542 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jcdp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dff1756e-0a6c-408f-9d31-c7cc88d1d970\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dcsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:53Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jcdp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.733001 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.733065 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.733083 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.733104 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.733119 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.758540 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bc63f51f-8fdb-44a8-bdff-ec60915754d9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://2e1ec5e3621120e1d45d214b07ea9461d74b8876f2ecb753c9cb64edceb6e9dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://32d2671acebdb7c9bf493978733f31bdc688b2c39538d6accbde1f8acb545ef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"et
cd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a976ae78e4000d634904dacd9850a4ef4b1a8f8466096b6d6a1a81bb1509d028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d08d6cf52478608ef265a49b4a56ce194ac8e56196751c94c2a0d8811c6fd23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://897669035e32774ca5030c245e526d1f4a891d11bf807b707598ca43dba686f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://172999edf1680b0afe2566240ffad5a2201e3631da0954fc97e4074ffade7651\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a057f71acfc959bd210de68b7c4e4051c6f90102ccd9c6461ca54dde7e4d9451\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"
ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da6fb033f5275185bfc7f0126a1e1ffa8be6ed8f45dbf4d1edbf9616ba6c4db8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.778729 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.795678 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.817651 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"053be0da-d1f2-46d1-83b1-c9135f5c3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l6j4f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rzvvg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.826490 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31bd15ba-1b8e-4d1b-8529-0018d98eba91\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5743acdcc3be9e6004ceda4b55d50dd3f70a0f644add23d30c9f195736b2f15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a631bfc6e67eebd3079551c1e098d5c9b5dfa9fdb3bdc5f1b392491dd6de1542\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.834985 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.835024 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.835056 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.835071 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.835080 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.838901 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"288a1135-0a6c-4b14-ac02-838923e33cfa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T17:41:45Z\\\",\\\"message\\\":\\\" 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3570322734/tls.crt::/tmp/serving-cert-3570322734/tls.key\\\\\\\"\\\\nI1208 17:41:45.041499 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 17:41:45.046325 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 17:41:45.046355 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 17:41:45.046402 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 17:41:45.046407 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 17:41:45.050153 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 17:41:45.050188 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050193 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 17:41:45.050197 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 17:41:45.050200 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 17:41:45.050202 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 17:41:45.050206 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1208 17:41:45.050221 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI1208 17:41:45.053146 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nF1208 17:41:45.053175 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 17:41:45.053214 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T17:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.847690 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.855876 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.862512 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-ld988" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d90afc7e-e255-4843-b19d-3ab9233e2024\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlmst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-ld988\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.879442 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"150992a3-efc5-4dc2-a696-390ea843f8c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\""
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xqg2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pjxmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.891493 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4e830a7-649e-4ac5-a163-288b8fcc87f3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://df4ea2631d89aee7ea27154e97131c2deb0604c986234fa00b81adc1f68380f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9a8565c2d48a5158ee81ea32bf94e1fa5918bd8ef77a2f4f13837dfdac8e5bc5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01f1601b1902e4cd2d97dad1feb305cf7fabfb3963c75af1790b8a70b3f36673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.902696 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee666ae-d4a8-4de9-9e11-93bcc0e98ef1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:40:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6553617f01079f332992f16cd1a257b9e090879e7f3081be7900e1e7d2ed55a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99c545c56fb91bdf227d9980b73132ba07deeb048b298061085b5ebee0385451\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ff208ec35eb72507f5ed9a811469dc40ecc4ab248f6b69e4b2875dbc4c2001b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T17:40:37Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a6e1b6d04154124bb495f1109ed575526e708cc2026b9f35f67f7fc1c0fc3f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T17:40:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T17:40:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:40:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.912852 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.922211 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0a3643f-fbed-4614-a9cb-87b71148c273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vwdx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bc5j2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.930862 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52658507-b084-49cb-a694-f012d44ccc82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T17:41:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h5g6p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T17:41:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mf4d4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.937329 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.937396 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.937407 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.937427 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:04 crc kubenswrapper[5113]: I1208 17:42:04.937438 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:04Z","lastTransitionTime":"2025-12-08T17:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.039608 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.039655 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.039665 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.039681 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.039690 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.142006 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.142122 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.142144 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.142174 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.142193 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.245342 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.245461 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.245487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.245525 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.245553 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.347884 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.347925 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.347935 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.347949 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.347958 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.451181 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.451240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.451252 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.451271 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.451283 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.553668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.553721 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.553732 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.553752 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.553766 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.656539 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.656591 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.656602 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.656620 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.656632 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.759275 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.759326 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.759340 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.759358 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.759373 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.861823 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.861877 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.861890 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.861909 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.861919 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.964712 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.964759 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.964790 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.964804 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:05 crc kubenswrapper[5113]: I1208 17:42:05.964813 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:05Z","lastTransitionTime":"2025-12-08T17:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.066854 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.066901 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.066911 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.066926 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.066936 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.101485 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ld988" event={"ID":"d90afc7e-e255-4843-b19d-3ab9233e2024","Type":"ContainerStarted","Data":"334fb7e55a60cb5e7a23986b0f6fe1c0a3b3feaa41e7268a3fba53d2fd5814ce"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.104969 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"95036e3f975967f98ac353899606e455d2347d20685c6e75212d8f9117d62dfb"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.105067 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"6c6f7d23259c967233d304485ffc3fde09517f73eb7d3feefa00a5aaccf10807"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.108887 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f" exitCode=0 Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.108979 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.170596 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.170654 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.170667 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.170687 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.170700 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.259832 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=14.259802133 podStartE2EDuration="14.259802133s" podCreationTimestamp="2025-12-08 17:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:06.259282002 +0000 UTC m=+91.975075128" watchObservedRunningTime="2025-12-08 17:42:06.259802133 +0000 UTC m=+91.975595269" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.274786 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.274849 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.274860 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.274874 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.274883 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.345251 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.345230162 podStartE2EDuration="14.345230162s" podCreationTimestamp="2025-12-08 17:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:06.32468179 +0000 UTC m=+92.040474906" watchObservedRunningTime="2025-12-08 17:42:06.345230162 +0000 UTC m=+92.061023278" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.377166 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.377206 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.377217 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.377231 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.377242 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.419325 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ld988" podStartSLOduration=69.419291455 podStartE2EDuration="1m9.419291455s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:06.396454662 +0000 UTC m=+92.112247788" watchObservedRunningTime="2025-12-08 17:42:06.419291455 +0000 UTC m=+92.135084571"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.456783 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.456757947 podStartE2EDuration="13.456757947s" podCreationTimestamp="2025-12-08 17:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:06.440069122 +0000 UTC m=+92.155862238" watchObservedRunningTime="2025-12-08 17:42:06.456757947 +0000 UTC m=+92.172551063"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.459309 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=13.458404244 podStartE2EDuration="13.458404244s" podCreationTimestamp="2025-12-08 17:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:06.45602322 +0000 UTC m=+92.171816336" watchObservedRunningTime="2025-12-08 17:42:06.458404244 +0000 UTC m=+92.174197370"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.479591 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.479660 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.479673 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.479691 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.479706 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.582711 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.583014 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.583140 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.583260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.583345 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.679280 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.679492 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:06 crc kubenswrapper[5113]: E1208 17:42:06.679721 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:06 crc kubenswrapper[5113]: E1208 17:42:06.680196 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.680753 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.680822 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:06 crc kubenswrapper[5113]: E1208 17:42:06.680932 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:06 crc kubenswrapper[5113]: E1208 17:42:06.681493 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.685454 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.685511 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.685524 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.685548 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.685559 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.787994 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.788119 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.788133 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.788158 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.788244 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.891615 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.891681 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.891694 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.891718 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.891736 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.994899 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.994951 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.994961 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.994975 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:06 crc kubenswrapper[5113]: I1208 17:42:06.994987 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:06Z","lastTransitionTime":"2025-12-08T17:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.097404 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.097464 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.097476 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.097494 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.097512 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.106200 5113 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.114837 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.114902 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.114917 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.116815 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"1356527a16e14eb4e8bcf84bcd924d4f58822a17892d29a957924ce5b9959dd0"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.199596 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.199646 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.199657 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.199672 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.199682 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.302522 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.302598 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.302614 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.302637 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.302651 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.405616 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.405685 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.405707 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.405729 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.405744 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.508316 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.508384 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.508398 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.508418 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.508436 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.610753 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.610825 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.610842 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.610867 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.610886 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.713351 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.713399 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.713411 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.713426 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.713438 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.816491 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.816555 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.816576 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.816602 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.816649 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.918991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.919062 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.919076 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.919092 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:07 crc kubenswrapper[5113]: I1208 17:42:07.919103 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:07Z","lastTransitionTime":"2025-12-08T17:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.024454 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.024944 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.024957 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.024978 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.024990 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.129879 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.129936 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.129947 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.129991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.130009 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.142919 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.232599 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.232673 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.232688 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.232706 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.232723 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.334627 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.334692 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.334709 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.334731 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.334748 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.415729 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.415861 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.416117 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.416239 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:24.416211973 +0000 UTC m=+110.132005089 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.417431 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.417483 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.417499 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.417587 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:24.417562144 +0000 UTC m=+110.133355430 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.436903 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.436956 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.436968 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.436985 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.436997 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.516879 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.516926 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.517132 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.517189 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.517230 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.517242 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.517216 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:24.517196351 +0000 UTC m=+110.232989457 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.517372 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:24.517348915 +0000 UTC m=+110.233142031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.541763 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.541840 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.541851 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.541872 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.541888 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.644919 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.644980 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.644991 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.645012 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.645023 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.679729 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.679936 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.680069 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.680124 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.680095 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.680240 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.680354 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.680502 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.720011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.720296 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:24.72023771 +0000 UTC m=+110.436030856 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.720611 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.720841 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:42:08 crc kubenswrapper[5113]: E1208 17:42:08.720968 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:24.720950306 +0000 UTC m=+110.436743442 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.747814 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.747925 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.747946 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.747966 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.747981 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.870644 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.870732 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.870751 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.870774 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.870790 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.975281 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.975320 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.975330 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.975344 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:08 crc kubenswrapper[5113]: I1208 17:42:08.975353 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:08Z","lastTransitionTime":"2025-12-08T17:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.078255 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.078302 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.078316 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.078336 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.078348 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.158345 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" event={"ID":"88405869-34c6-458b-ab82-663f9a965335","Type":"ContainerStarted","Data":"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.161933 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerStarted","Data":"829b289d54add0f05d5283bf8dbc0bac1f2b7b46736de35585f1ad519b214d41"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.165599 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"f6f7c021a2fcc0468a28a7246bb0df375a7b306c4799388b2ae1634b8cdc5d78"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.185668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.185745 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.185768 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.185787 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.185838 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.204822 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.216259 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jcdp7" event={"ID":"dff1756e-0a6c-408f-9d31-c7cc88d1d970","Type":"ContainerStarted","Data":"7ca9bd9e01b5f3e6edd4cda3b3cdca1bb58a3b334916ae221141ce61c9b2fb1e"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.240709 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jcdp7" podStartSLOduration=72.240687499 podStartE2EDuration="1m12.240687499s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:09.240384162 +0000 UTC m=+94.956177278" watchObservedRunningTime="2025-12-08 17:42:09.240687499 +0000 UTC m=+94.956480615" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.288440 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.288505 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.288521 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.288547 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.288562 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.391206 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.391265 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.391283 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.391303 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.391315 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.493297 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.493342 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.493353 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.493367 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.493396 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.595547 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.595601 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.595613 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.595630 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.595642 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.679922 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:42:09 crc kubenswrapper[5113]: E1208 17:42:09.680194 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.699491 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.699547 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.699559 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.699576 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.699588 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.802331 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.802378 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.802390 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.802406 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.802416 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.906659 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.907176 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.907190 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.907208 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:09 crc kubenswrapper[5113]: I1208 17:42:09.907217 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:09Z","lastTransitionTime":"2025-12-08T17:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.009872 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.009927 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.009957 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.009979 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.009994 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.111579 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.111615 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.111623 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.111637 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.111647 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:10Z","lastTransitionTime":"2025-12-08T17:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.222406 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"8593bd67cbe1dd747ee8a96ae19dcc6297c53dab71d7396268e817680840dada"}
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.224559 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" event={"ID":"88405869-34c6-458b-ab82-663f9a965335","Type":"ContainerStarted","Data":"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4"}
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.226738 5113 generic.go:358] "Generic (PLEG): container finished" podID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" containerID="829b289d54add0f05d5283bf8dbc0bac1f2b7b46736de35585f1ad519b214d41" exitCode=0
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.226842 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerDied","Data":"829b289d54add0f05d5283bf8dbc0bac1f2b7b46736de35585f1ad519b214d41"}
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.234371 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"22457bc6522b85bcbe013eced3ac8166bd69e535896e66593d52ccc2b8f7fba2"}
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.241620 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"}
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.245155 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g9mkp" event={"ID":"c4621882-3d98-4910-9263-5959d2302427","Type":"ContainerStarted","Data":"2bb12e71b3d999c009aafdef8c275c416a63bd71bfcab8ce7926b55d3bb95371"}
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.261206 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" podStartSLOduration=73.261187008 podStartE2EDuration="1m13.261187008s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:10.259479899 +0000 UTC m=+95.975273035" watchObservedRunningTime="2025-12-08 17:42:10.261187008 +0000 UTC m=+95.976980114"
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.300674 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podStartSLOduration=73.300653124 podStartE2EDuration="1m13.300653124s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:10.300620753 +0000 UTC m=+96.016413889" watchObservedRunningTime="2025-12-08 17:42:10.300653124 +0000 UTC m=+96.016446240"
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.607027 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g9mkp" podStartSLOduration=73.607009494 podStartE2EDuration="1m13.607009494s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:10.606862001 +0000 UTC m=+96.322655117" watchObservedRunningTime="2025-12-08 17:42:10.607009494 +0000 UTC m=+96.322802630"
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.679734 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.679734 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:10 crc kubenswrapper[5113]: E1208 17:42:10.679902 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.679930 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:42:10 crc kubenswrapper[5113]: E1208 17:42:10.680163 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:10 crc kubenswrapper[5113]: E1208 17:42:10.680274 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:42:10 crc kubenswrapper[5113]: I1208 17:42:10.680334 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:10 crc kubenswrapper[5113]: E1208 17:42:10.680594 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:12 crc kubenswrapper[5113]: I1208 17:42:12.258935 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerStarted","Data":"4992b98ed6be145ef84cda7eb83ca348c59081bff1d93ead1f8136052c6eb381"}
Dec 08 17:42:12 crc kubenswrapper[5113]: I1208 17:42:12.264651 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"}
Dec 08 17:42:12 crc kubenswrapper[5113]: I1208 17:42:12.679285 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:12 crc kubenswrapper[5113]: I1208 17:42:12.679348 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:12 crc kubenswrapper[5113]: E1208 17:42:12.679456 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:12 crc kubenswrapper[5113]: I1208 17:42:12.679602 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:42:12 crc kubenswrapper[5113]: I1208 17:42:12.679666 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:12 crc kubenswrapper[5113]: E1208 17:42:12.679814 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:42:12 crc kubenswrapper[5113]: E1208 17:42:12.679878 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:12 crc kubenswrapper[5113]: E1208 17:42:12.680027 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.270153 5113 generic.go:358] "Generic (PLEG): container finished" podID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" containerID="4992b98ed6be145ef84cda7eb83ca348c59081bff1d93ead1f8136052c6eb381" exitCode=0
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.270211 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerDied","Data":"4992b98ed6be145ef84cda7eb83ca348c59081bff1d93ead1f8136052c6eb381"}
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.995183 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.995243 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.995260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.995280 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 17:42:13 crc kubenswrapper[5113]: I1208 17:42:13.995292 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T17:42:13Z","lastTransitionTime":"2025-12-08T17:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 17:42:14 crc kubenswrapper[5113]: I1208 17:42:14.036540 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967"]
Dec 08 17:42:14 crc kubenswrapper[5113]: I1208 17:42:14.690540 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 17:42:14 crc kubenswrapper[5113]: I1208 17:42:14.703204 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.669689 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967"
Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.670186 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2"
Dec 08 17:42:16 crc kubenswrapper[5113]: E1208 17:42:16.670386 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273"
Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.669863 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.669850 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 17:42:16 crc kubenswrapper[5113]: E1208 17:42:16.670505 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.669844 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 17:42:16 crc kubenswrapper[5113]: E1208 17:42:16.670881 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 17:42:16 crc kubenswrapper[5113]: E1208 17:42:16.671013 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.673150 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.673731 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.673752 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.674139 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.815199 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b39798e2-7067-4e79-8c07-d677ede72a6e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.815515 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b39798e2-7067-4e79-8c07-d677ede72a6e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.815535 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b39798e2-7067-4e79-8c07-d677ede72a6e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.815554 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b39798e2-7067-4e79-8c07-d677ede72a6e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.815569 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39798e2-7067-4e79-8c07-d677ede72a6e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.916918 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b39798e2-7067-4e79-8c07-d677ede72a6e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.916991 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b39798e2-7067-4e79-8c07-d677ede72a6e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.917010 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b39798e2-7067-4e79-8c07-d677ede72a6e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.917056 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b39798e2-7067-4e79-8c07-d677ede72a6e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.917075 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39798e2-7067-4e79-8c07-d677ede72a6e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.917141 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b39798e2-7067-4e79-8c07-d677ede72a6e-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.917617 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b39798e2-7067-4e79-8c07-d677ede72a6e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.918575 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b39798e2-7067-4e79-8c07-d677ede72a6e-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.924899 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b39798e2-7067-4e79-8c07-d677ede72a6e-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.935539 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b39798e2-7067-4e79-8c07-d677ede72a6e-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-gw967\" (UID: \"b39798e2-7067-4e79-8c07-d677ede72a6e\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:16 crc kubenswrapper[5113]: I1208 17:42:16.996278 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" Dec 08 17:42:17 crc kubenswrapper[5113]: W1208 17:42:17.013526 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb39798e2_7067_4e79_8c07_d677ede72a6e.slice/crio-484b3b58ac6bb0714d6f0e3c95006ae13f562310152f0a3c03b282fcf3219c9e WatchSource:0}: Error finding container 484b3b58ac6bb0714d6f0e3c95006ae13f562310152f0a3c03b282fcf3219c9e: Status 404 returned error can't find the container with id 484b3b58ac6bb0714d6f0e3c95006ae13f562310152f0a3c03b282fcf3219c9e Dec 08 17:42:17 crc kubenswrapper[5113]: I1208 17:42:17.289627 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerStarted","Data":"ea88c4d64978f2e5b441f5e8f240eb7a5c291d6cc83dce16514a523c7b65cbc8"} Dec 08 17:42:17 crc kubenswrapper[5113]: I1208 17:42:17.298878 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerStarted","Data":"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"} Dec 08 17:42:17 crc kubenswrapper[5113]: I1208 17:42:17.300941 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" event={"ID":"b39798e2-7067-4e79-8c07-d677ede72a6e","Type":"ContainerStarted","Data":"484b3b58ac6bb0714d6f0e3c95006ae13f562310152f0a3c03b282fcf3219c9e"} Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.308021 5113 generic.go:358] "Generic (PLEG): container finished" podID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" containerID="ea88c4d64978f2e5b441f5e8f240eb7a5c291d6cc83dce16514a523c7b65cbc8" exitCode=0 Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.308174 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerDied","Data":"ea88c4d64978f2e5b441f5e8f240eb7a5c291d6cc83dce16514a523c7b65cbc8"} Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.308693 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.308719 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.308732 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr"
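The startup-latency record that follows is plain wall-clock arithmetic: podStartSLOduration is the gap between podCreationTimestamp and the moment the kubelet first observed the pod running, and the m=+104... suffixes are the kubelet's monotonic clock, seconds since the process started at 17:40:34. Because both pull timestamps are the zero time here, the SLO and end-to-end durations coincide. A small check of the numbers, copied from the entry below:

```go
// sloduration reproduces the arithmetic in the "Observed pod startup
// duration" entry that follows: the SLO duration is the observed running
// time minus podCreationTimestamp. Both timestamps are copied from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-08 17:40:57 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-08 17:42:18.359597825 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m21.359597825s, i.e. podStartSLOduration=81.359597825
	// and podStartE2EDuration="1m21.359597825s" in the log.
	fmt.Println(observed.Sub(created))
}
```

Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.359611 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podStartSLOduration=81.359597825 podStartE2EDuration="1m21.359597825s" podCreationTimestamp="2025-12-08 17:40:57 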
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:18.35803284 +0000 UTC m=+104.073825956" watchObservedRunningTime="2025-12-08 17:42:18.359597825 +0000 UTC m=+104.075390941" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.469911 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.471114 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.679239 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.679301 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:18 crc kubenswrapper[5113]: E1208 17:42:18.679467 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:18 crc kubenswrapper[5113]: E1208 17:42:18.679626 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.679732 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:18 crc kubenswrapper[5113]: I1208 17:42:18.679956 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:18 crc kubenswrapper[5113]: E1208 17:42:18.679952 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:18 crc kubenswrapper[5113]: E1208 17:42:18.680076 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:20 crc kubenswrapper[5113]: I1208 17:42:20.679986 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:20 crc kubenswrapper[5113]: E1208 17:42:20.680578 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:20 crc kubenswrapper[5113]: I1208 17:42:20.680120 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:20 crc kubenswrapper[5113]: E1208 17:42:20.680657 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:20 crc kubenswrapper[5113]: I1208 17:42:20.680136 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:20 crc kubenswrapper[5113]: E1208 17:42:20.680711 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:20 crc kubenswrapper[5113]: I1208 17:42:20.680100 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:20 crc kubenswrapper[5113]: E1208 17:42:20.680764 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:21 crc kubenswrapper[5113]: I1208 17:42:21.180949 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bc5j2"] Dec 08 17:42:21 crc kubenswrapper[5113]: I1208 17:42:21.322320 5113 generic.go:358] "Generic (PLEG): container finished" podID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" containerID="ef25a019a1a8ca2cfb79b6062ee98db7b9bab48fae3adc17ff888d468dbca18d" exitCode=0 Dec 08 17:42:21 crc kubenswrapper[5113]: I1208 17:42:21.322409 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerDied","Data":"ef25a019a1a8ca2cfb79b6062ee98db7b9bab48fae3adc17ff888d468dbca18d"} Dec 08 17:42:21 crc kubenswrapper[5113]: I1208 17:42:21.325097 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" event={"ID":"b39798e2-7067-4e79-8c07-d677ede72a6e","Type":"ContainerStarted","Data":"39545ce7606770768e30e3da50a34496428b81668f8c3b24642b6e8926281082"} Dec 08 17:42:21 crc kubenswrapper[5113]: I1208 17:42:21.325418 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:21 crc kubenswrapper[5113]: E1208 17:42:21.325668 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:21 crc kubenswrapper[5113]: I1208 17:42:21.370699 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-gw967" podStartSLOduration=84.370678139 podStartE2EDuration="1m24.370678139s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:21.369781889 +0000 UTC m=+107.085575005" watchObservedRunningTime="2025-12-08 17:42:21.370678139 +0000 UTC m=+107.086471255" Dec 08 17:42:22 crc kubenswrapper[5113]: I1208 17:42:22.685617 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:22 crc kubenswrapper[5113]: E1208 17:42:22.685754 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:22 crc kubenswrapper[5113]: I1208 17:42:22.685763 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:22 crc kubenswrapper[5113]: I1208 17:42:22.685800 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:22 crc kubenswrapper[5113]: E1208 17:42:22.685922 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:22 crc kubenswrapper[5113]: E1208 17:42:22.686078 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:22 crc kubenswrapper[5113]: I1208 17:42:22.686122 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:22 crc kubenswrapper[5113]: E1208 17:42:22.686283 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.512396 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.512985 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.512624 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.513166 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.513199 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.513210 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.513187184 +0000 UTC m=+142.228980290 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.513210 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.513257 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.513251545 +0000 UTC m=+142.229044661 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.615353 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.615395 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.615502 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
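These MountVolume failures are all the same story: the Secret or ConfigMap behind the volume is not yet "registered" in the kubelet's watch-based object cache, so SetUp cannot proceed, and nestedpendingoperations parks the operation behind an exponential backoff. A 32s durationBeforeRetry is consistent with a delay that doubles from 500ms (0.5s x 2^6 = 32s) toward a cap of roughly two minutes; those constants mirror kubelet's exponentialbackoff package and are assumptions here, not values printed in the log. A sketch of the schedule:

```go
// mountbackoff sketches the retry schedule behind "durationBeforeRetry 32s".
// The 500ms start and ~2m cap mirror kubelet's exponentialbackoff package;
// treat both constants as assumptions, since the log only shows the 32s step.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 500 * time.Millisecond
		ceiling = 2*time.Minute + 2*time.Second
	)
	delay := initial
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume\n", attempt, delay)
		if delay *= 2; delay > ceiling {
			delay = ceiling
		}
	}
	// attempt 7 waits 32s, the durationBeforeRetry reported in the log
}
```

Each failed attempt roughly doubles the wait, which is why every retry in this stretch lands exactly 32 seconds out (failure at 17:42:24, next try permitted at 17:42:56).

Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.615561 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.615547298 +0000 UTC m=+142.331340414 (durationBeforeRetry 32s). 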
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.615759 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.615771 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.615781 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.615811 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.615805024 +0000 UTC m=+142.331598140 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.680998 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.681116 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.681480 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.681723 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.681764 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.681811 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.681935 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.681984 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.683129 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.683344 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.818531 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:24 crc kubenswrapper[5113]: I1208 17:42:24.818629 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.818803 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.818910 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.818858661 +0000 UTC m=+142.534651777 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:24 crc kubenswrapper[5113]: E1208 17:42:24.819077 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs podName:d0a3643f-fbed-4614-a9cb-87b71148c273 nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.819054666 +0000 UTC m=+142.534847932 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs") pod "network-metrics-daemon-bc5j2" (UID: "d0a3643f-fbed-4614-a9cb-87b71148c273") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 17:42:25 crc kubenswrapper[5113]: I1208 17:42:25.343185 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerStarted","Data":"46291567984d2be8f7d3cf07ce2b641dec638c4fa0beecb49dc799e992489a4d"} Dec 08 17:42:26 crc kubenswrapper[5113]: I1208 17:42:26.350111 5113 generic.go:358] "Generic (PLEG): container finished" podID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" containerID="46291567984d2be8f7d3cf07ce2b641dec638c4fa0beecb49dc799e992489a4d" exitCode=0 Dec 08 17:42:26 crc kubenswrapper[5113]: I1208 17:42:26.350180 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerDied","Data":"46291567984d2be8f7d3cf07ce2b641dec638c4fa0beecb49dc799e992489a4d"} Dec 08 17:42:26 crc kubenswrapper[5113]: I1208 17:42:26.793111 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:26 crc kubenswrapper[5113]: E1208 17:42:26.793300 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bc5j2" podUID="d0a3643f-fbed-4614-a9cb-87b71148c273" Dec 08 17:42:26 crc kubenswrapper[5113]: I1208 17:42:26.793719 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:26 crc kubenswrapper[5113]: E1208 17:42:26.795256 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
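The TearDown failure above is different from the cache misses: here the volume plugin itself is missing. The kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the kubelet since the restart, so there is no CSI client to unmount through, and the operation sits on the same 32s backoff until the driver's registration socket reappears. A rough way to see which drivers are currently registered on a node; the registry path is the kubelet default and an assumption for this host:

```go
// csiplugins is a rough sketch of how to see which CSI drivers have
// registered with the kubelet: each driver drops a UNIX socket into the
// kubelet's plugin registration directory. The path is the kubelet default,
// an assumption here, not something the log states.
package main

import (
	"fmt"
	"os"
)

const registryDir = "/var/lib/kubelet/plugins_registry"

func main() {
	entries, err := os.ReadDir(registryDir)
	if err != nil {
		fmt.Println("cannot read plugin registry:", err)
		return
	}
	for _, e := range entries {
		info, err := e.Info()
		if err == nil && info.Mode()&os.ModeSocket != 0 {
			// e.g. a kubevirt.io.hostpath-provisioner socket once the driver is back
			fmt.Println("registered plugin socket:", e.Name())
		}
	}
}
```

Dec 08 17:42:26 crc kubenswrapper[5113]: I1208 17:42:26.795737 5113 util.go:30] "No sandbox for pod can be found. 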
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:26 crc kubenswrapper[5113]: I1208 17:42:26.796298 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:26 crc kubenswrapper[5113]: E1208 17:42:26.797911 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 17:42:26 crc kubenswrapper[5113]: E1208 17:42:26.798921 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.500187 5113 generic.go:358] "Generic (PLEG): container finished" podID="053be0da-d1f2-46d1-83b1-c9135f5c3c61" containerID="e373a0a31f6158da25d5c130cafbb38afb1fc1e3092ffd37a2b3b8801cb2c225" exitCode=0 Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.501590 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerDied","Data":"e373a0a31f6158da25d5c130cafbb38afb1fc1e3092ffd37a2b3b8801cb2c225"} Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.787260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.787550 5113 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.835667 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-j9s7b"] Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.858422 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl"] Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.858787 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.861289 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.872353 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-9pm5r"] Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.877196 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.889686 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.889827 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.891620 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.893022 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.893187 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.893746 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.893955 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.893958 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.894211 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.916353 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-d444j"] Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.916550 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.916624 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.924235 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.924464 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.924714 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.925501 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.925597 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.925678 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.925692 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.925748 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.925869 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.926270 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.926399 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.926479 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.934049 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.947447 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-n767g"] Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.965154 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-rddd2"] Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.965762 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.969200 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.969490 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.969698 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.969859 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.970349 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 17:42:27 crc kubenswrapper[5113]: I1208 17:42:27.972914 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.014419 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-dvf7w"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.016265 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.022153 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.023961 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.024780 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.024835 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4d14e-889b-4611-a96e-02f40133e325-serving-cert\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.024876 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.024907 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdjp\" (UniqueName: \"kubernetes.io/projected/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-kube-api-access-sfdjp\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.024937 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-audit-dir\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.024961 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-trusted-ca-bundle\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025013 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16451d03-de16-4156-8838-9746b4fcd1a9-serving-cert\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025062 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmph\" (UniqueName: \"kubernetes.io/projected/4ea4d14e-889b-4611-a96e-02f40133e325-kube-api-access-vbmph\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025088 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eef07e86-b52a-4599-8651-fc6852b3e627-machine-approver-tls\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025107 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcw7b\" (UniqueName: \"kubernetes.io/projected/eef07e86-b52a-4599-8651-fc6852b3e627-kube-api-access-pcw7b\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025127 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-image-import-ca\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025152 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-encryption-config\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025177 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025200 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eef07e86-b52a-4599-8651-fc6852b3e627-auth-proxy-config\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025223 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-etcd-serving-ca\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025242 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ea4d14e-889b-4611-a96e-02f40133e325-tmp\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: 
I1208 17:42:28.025264 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025293 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-node-pullsecrets\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025322 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef07e86-b52a-4599-8651-fc6852b3e627-config\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025350 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-audit\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025375 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-etcd-client\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025416 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-encryption-config\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025436 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-images\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025460 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025496 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-client-ca\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025539 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-audit-policies\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025571 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9s47\" (UniqueName: \"kubernetes.io/projected/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-kube-api-access-g9s47\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025596 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752jc\" (UniqueName: \"kubernetes.io/projected/16451d03-de16-4156-8838-9746b4fcd1a9-kube-api-access-752jc\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025618 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-serving-cert\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025651 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-serving-cert\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025680 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-etcd-client\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025699 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-config\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-config\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: 
\"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025754 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749qm\" (UniqueName: \"kubernetes.io/projected/9a9cf80c-c14e-4d96-9887-55bdabc78cec-kube-api-access-749qm\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025775 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a9cf80c-c14e-4d96-9887-55bdabc78cec-audit-dir\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025813 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025842 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-config\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.025863 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-config\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.026935 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.028104 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.028692 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.029011 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.029512 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.029620 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.029838 5113 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.030217 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.040468 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qrdwk"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.040844 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.067273 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.067435 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.067581 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.067842 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.070651 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.073743 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.074746 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.075390 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.075459 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.075582 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.075612 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.078535 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.078736 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.081352 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.084019 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.085769 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.086849 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.086891 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.086999 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.087115 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.087162 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.087985 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.088314 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.088586 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.088748 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.088895 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.089262 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.089292 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.089333 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.089514 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 
17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.090092 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.090877 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.090914 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.091603 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.091625 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.105609 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.108287 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.109790 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.111698 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.111888 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.116424 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.116870 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.117056 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.117347 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.117389 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127488 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pcw7b\" (UniqueName: \"kubernetes.io/projected/eef07e86-b52a-4599-8651-fc6852b3e627-kube-api-access-pcw7b\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127570 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9561139d-1882-40e1-bd1e-b45dd921005a-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127602 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-image-import-ca\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127633 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-audit-policies\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127659 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-encryption-config\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127682 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17425c96-b772-49f5-8dca-94501ae13766-tmp\") pod 
\"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127708 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127733 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9sc7\" (UniqueName: \"kubernetes.io/projected/17425c96-b772-49f5-8dca-94501ae13766-kube-api-access-k9sc7\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127764 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eef07e86-b52a-4599-8651-fc6852b3e627-auth-proxy-config\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-serving-cert\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127819 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-etcd-serving-ca\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127844 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ea4d14e-889b-4611-a96e-02f40133e325-tmp\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127871 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127899 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-node-pullsecrets\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 
17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127929 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef07e86-b52a-4599-8651-fc6852b3e627-config\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127952 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96a8ac1-0465-4a81-88bf-472026300c81-audit-dir\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.127976 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128003 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-audit\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128030 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-etcd-client\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128141 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17425c96-b772-49f5-8dca-94501ae13766-serving-cert\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128198 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-encryption-config\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128220 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-images\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128269 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfnfm\" (UniqueName: 
\"kubernetes.io/projected/9561139d-1882-40e1-bd1e-b45dd921005a-kube-api-access-sfnfm\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128309 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128349 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128374 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-client-ca\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128407 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128425 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128446 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-audit-policies\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128464 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9s47\" (UniqueName: \"kubernetes.io/projected/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-kube-api-access-g9s47\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128487 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-752jc\" (UniqueName: \"kubernetes.io/projected/16451d03-de16-4156-8838-9746b4fcd1a9-kube-api-access-752jc\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128503 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128522 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29hm\" (UniqueName: \"kubernetes.io/projected/c96a8ac1-0465-4a81-88bf-472026300c81-kube-api-access-z29hm\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128557 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxmd6\" (UniqueName: \"kubernetes.io/projected/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-kube-api-access-cxmd6\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128576 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-serving-cert\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128592 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-trusted-ca\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128623 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-serving-cert\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128641 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128665 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-etcd-client\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128684 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-config\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128707 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-config\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128732 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-749qm\" (UniqueName: \"kubernetes.io/projected/9a9cf80c-c14e-4d96-9887-55bdabc78cec-kube-api-access-749qm\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128752 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9561139d-1882-40e1-bd1e-b45dd921005a-config\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128780 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a9cf80c-c14e-4d96-9887-55bdabc78cec-audit-dir\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128806 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128828 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128854 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.128879 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-config\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.129512 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-node-pullsecrets\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.129652 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-config\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.129686 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-config\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.129711 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-client-ca\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.129781 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-config\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.129812 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.130419 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.130503 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.130547 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4d14e-889b-4611-a96e-02f40133e325-serving-cert\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.130580 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.130615 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdjp\" (UniqueName: \"kubernetes.io/projected/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-kube-api-access-sfdjp\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.130786 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.131027 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-config\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.131857 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.132243 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-config\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: 
\"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.132252 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-config\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.132384 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16451d03-de16-4156-8838-9746b4fcd1a9-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.133235 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-audit-dir\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.133378 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-trusted-ca-bundle\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.137895 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-etcd-serving-ca\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.137959 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-client-ca\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.138549 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-audit-policies\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.142070 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.142247 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ea4d14e-889b-4611-a96e-02f40133e325-tmp\") pod 
\"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.142292 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-audit\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.142535 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef07e86-b52a-4599-8651-fc6852b3e627-config\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.143340 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a9cf80c-c14e-4d96-9887-55bdabc78cec-trusted-ca-bundle\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.143394 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-audit-dir\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.143493 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16451d03-de16-4156-8838-9746b4fcd1a9-serving-cert\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.143574 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbmph\" (UniqueName: \"kubernetes.io/projected/4ea4d14e-889b-4611-a96e-02f40133e325-kube-api-access-vbmph\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.143666 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eef07e86-b52a-4599-8651-fc6852b3e627-machine-approver-tls\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.144920 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-images\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.145028 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9a9cf80c-c14e-4d96-9887-55bdabc78cec-audit-dir\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.146360 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.148460 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-encryption-config\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.148964 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eef07e86-b52a-4599-8651-fc6852b3e627-machine-approver-tls\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.149331 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-image-import-ca\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.152389 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eef07e86-b52a-4599-8651-fc6852b3e627-auth-proxy-config\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.153722 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.154122 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-serving-cert\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.154639 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4d14e-889b-4611-a96e-02f40133e325-serving-cert\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.154974 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-serving-cert\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.156118 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-config\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.160887 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdjp\" (UniqueName: \"kubernetes.io/projected/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-kube-api-access-sfdjp\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.162329 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9a9cf80c-c14e-4d96-9887-55bdabc78cec-etcd-client\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.166457 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16451d03-de16-4156-8838-9746b4fcd1a9-serving-cert\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.167386 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-encryption-config\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.169688 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcw7b\" (UniqueName: \"kubernetes.io/projected/eef07e86-b52a-4599-8651-fc6852b3e627-kube-api-access-pcw7b\") pod \"machine-approver-54c688565-d444j\" (UID: \"eef07e86-b52a-4599-8651-fc6852b3e627\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.170390 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-749qm\" (UniqueName: \"kubernetes.io/projected/9a9cf80c-c14e-4d96-9887-55bdabc78cec-kube-api-access-749qm\") pod \"apiserver-8596bd845d-n767g\" (UID: \"9a9cf80c-c14e-4d96-9887-55bdabc78cec\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.175535 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-752jc\" (UniqueName: \"kubernetes.io/projected/16451d03-de16-4156-8838-9746b4fcd1a9-kube-api-access-752jc\") pod \"authentication-operator-7f5c659b84-lxlpl\" (UID: \"16451d03-de16-4156-8838-9746b4fcd1a9\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.178751 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/da8d1cb5-ad1f-48b7-8208-6b840f893cd5-etcd-client\") pod \"apiserver-9ddfb9f55-j9s7b\" (UID: \"da8d1cb5-ad1f-48b7-8208-6b840f893cd5\") " pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.179864 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9s47\" (UniqueName: \"kubernetes.io/projected/9a01f3ac-05d5-4d04-8699-cc78bbd4df0e-kube-api-access-g9s47\") pod \"machine-api-operator-755bb95488-9pm5r\" (UID: \"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.179875 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbmph\" (UniqueName: \"kubernetes.io/projected/4ea4d14e-889b-4611-a96e-02f40133e325-kube-api-access-vbmph\") pod \"controller-manager-65b6cccf98-rddd2\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.180824 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jmmc5"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.180989 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.183053 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.183437 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.183583 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.183800 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.196883 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246300 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246361 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246382 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-config\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246401 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-config\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246447 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-client-ca\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246474 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.246521 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247606 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9561139d-1882-40e1-bd1e-b45dd921005a-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" Dec 08 
17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247688 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-audit-policies\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247721 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17425c96-b772-49f5-8dca-94501ae13766-tmp\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247753 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k9sc7\" (UniqueName: \"kubernetes.io/projected/17425c96-b772-49f5-8dca-94501ae13766-kube-api-access-k9sc7\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247787 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-serving-cert\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247840 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96a8ac1-0465-4a81-88bf-472026300c81-audit-dir\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247866 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247894 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17425c96-b772-49f5-8dca-94501ae13766-serving-cert\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247928 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-config\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.247984 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfnfm\" 
(UniqueName: \"kubernetes.io/projected/9561139d-1882-40e1-bd1e-b45dd921005a-kube-api-access-sfnfm\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248019 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb9642d5-438c-4cdb-ab4a-75a72e236fee-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rmn54\" (UID: \"fb9642d5-438c-4cdb-ab4a-75a72e236fee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248087 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248117 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrhv\" (UniqueName: \"kubernetes.io/projected/fb9642d5-438c-4cdb-ab4a-75a72e236fee-kube-api-access-fzrhv\") pod \"cluster-samples-operator-6b564684c8-rmn54\" (UID: \"fb9642d5-438c-4cdb-ab4a-75a72e236fee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248224 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248253 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248289 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248315 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248338 
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248338 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z29hm\" (UniqueName: \"kubernetes.io/projected/c96a8ac1-0465-4a81-88bf-472026300c81-kube-api-access-z29hm\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248360 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxmd6\" (UniqueName: \"kubernetes.io/projected/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-kube-api-access-cxmd6\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248387 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-trusted-ca\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248425 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.248465 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9561139d-1882-40e1-bd1e-b45dd921005a-config\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.249183 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-audit-policies\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.249617 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-client-ca\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.250720 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.251266 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9561139d-1882-40e1-bd1e-b45dd921005a-config\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.251989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17425c96-b772-49f5-8dca-94501ae13766-tmp\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.252755 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96a8ac1-0465-4a81-88bf-472026300c81-audit-dir\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.254893 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.258817 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.258874 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-serving-cert\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.258952 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-trusted-ca\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.259135 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.259348 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.259605 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.260307 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.261229 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-config\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.261542 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.263566 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.264711 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.265212 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.265394 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.268133 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17425c96-b772-49f5-8dca-94501ae13766-serving-cert\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.271298 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9561139d-1882-40e1-bd1e-b45dd921005a-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.272396 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.279189 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxmd6\" (UniqueName: \"kubernetes.io/projected/4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91-kube-api-access-cxmd6\") pod \"console-operator-67c89758df-qrdwk\" (UID: \"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91\") " pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.280001 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfnfm\" (UniqueName: \"kubernetes.io/projected/9561139d-1882-40e1-bd1e-b45dd921005a-kube-api-access-sfnfm\") pod \"openshift-apiserver-operator-846cbfc458-kcdmq\" (UID: \"9561139d-1882-40e1-bd1e-b45dd921005a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.279332 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z29hm\" (UniqueName: \"kubernetes.io/projected/c96a8ac1-0465-4a81-88bf-472026300c81-kube-api-access-z29hm\") pod \"oauth-openshift-66458b6674-dvf7w\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.281986 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.286672 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-klln7"]
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.295064 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9sc7\" (UniqueName: \"kubernetes.io/projected/17425c96-b772-49f5-8dca-94501ae13766-kube-api-access-k9sc7\") pod \"route-controller-manager-776cdc94d6-prc9d\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"
Dec 08 17:42:28 crc kubenswrapper[5113]: W1208 17:42:28.343945 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef07e86_b52a_4599_8651_fc6852b3e627.slice/crio-f53c2471837d5fa6adc9046f749adae3f867c5fa4f49cd8666c68e0304f5bac5 WatchSource:0}: Error finding container f53c2471837d5fa6adc9046f749adae3f867c5fa4f49cd8666c68e0304f5bac5: Status 404 returned error can't find the container with id f53c2471837d5fa6adc9046f749adae3f867c5fa4f49cd8666c68e0304f5bac5
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.349794 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-available-featuregates\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.349885 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-serving-cert\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.350482 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb9642d5-438c-4cdb-ab4a-75a72e236fee-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rmn54\" (UID: \"fb9642d5-438c-4cdb-ab4a-75a72e236fee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.350545 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqrw\" (UniqueName: \"kubernetes.io/projected/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-kube-api-access-wmqrw\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.350719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrhv\" (UniqueName: \"kubernetes.io/projected/fb9642d5-438c-4cdb-ab4a-75a72e236fee-kube-api-access-fzrhv\") pod \"cluster-samples-operator-6b564684c8-rmn54\" (UID: \"fb9642d5-438c-4cdb-ab4a-75a72e236fee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.364750 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb9642d5-438c-4cdb-ab4a-75a72e236fee-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-rmn54\" (UID: \"fb9642d5-438c-4cdb-ab4a-75a72e236fee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.369587 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrhv\" (UniqueName: \"kubernetes.io/projected/fb9642d5-438c-4cdb-ab4a-75a72e236fee-kube-api-access-fzrhv\") pod \"cluster-samples-operator-6b564684c8-rmn54\" (UID: \"fb9642d5-438c-4cdb-ab4a-75a72e236fee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.382157 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.390282 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.392012 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-n9p2l"]
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.392051 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.393203 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-klln7"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.397541 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.398003 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.398599 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.399181 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.399403 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.401232 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.401251 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.406690 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.425271 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.429663 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.451474 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmqrw\" (UniqueName: \"kubernetes.io/projected/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-kube-api-access-wmqrw\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.451575 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spx8c\" (UniqueName: \"kubernetes.io/projected/e5062982-84d6-4c80-8dce-4ab0e3098e96-kube-api-access-spx8c\") pod \"downloads-747b44746d-klln7\" (UID: \"e5062982-84d6-4c80-8dce-4ab0e3098e96\") " pod="openshift-console/downloads-747b44746d-klln7"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.451618 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-available-featuregates\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.451655 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-serving-cert\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.456399 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-serving-cert\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.484913 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"
Need to start a new one" pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.777234 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.777603 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.779564 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.779692 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.779860 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.780957 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4l6q\" (UniqueName: \"kubernetes.io/projected/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-kube-api-access-n4l6q\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781026 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spx8c\" (UniqueName: \"kubernetes.io/projected/e5062982-84d6-4c80-8dce-4ab0e3098e96-kube-api-access-spx8c\") pod \"downloads-747b44746d-klln7\" (UID: \"e5062982-84d6-4c80-8dce-4ab0e3098e96\") " pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781075 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-oauth-config\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781103 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-oauth-serving-cert\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781137 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-trusted-ca-bundle\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781183 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-serving-cert\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 
17:42:28.781256 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-config\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781286 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-service-ca\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781354 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.781428 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.785052 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.820568 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-available-featuregates\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.828106 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spx8c\" (UniqueName: \"kubernetes.io/projected/e5062982-84d6-4c80-8dce-4ab0e3098e96-kube-api-access-spx8c\") pod \"downloads-747b44746d-klln7\" (UID: \"e5062982-84d6-4c80-8dce-4ab0e3098e96\") " pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.842608 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-r9xfs"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.869174 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmqrw\" (UniqueName: \"kubernetes.io/projected/95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada-kube-api-access-wmqrw\") pod \"openshift-config-operator-5777786469-jmmc5\" (UID: \"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada\") " pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.881878 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-config\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.881928 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-service-ca\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " 
pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.881974 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4l6q\" (UniqueName: \"kubernetes.io/projected/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-kube-api-access-n4l6q\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.881994 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-oauth-config\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.882011 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-oauth-serving-cert\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.882050 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-trusted-ca-bundle\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.882599 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-serving-cert\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.886308 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-config\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.892114 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-serving-cert\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.894191 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-console-oauth-config\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.896303 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-service-ca\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " 
pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.897346 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-trusted-ca-bundle\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.901572 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-oauth-serving-cert\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.909536 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.910519 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.921779 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.922782 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.923815 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.924968 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4l6q\" (UniqueName: \"kubernetes.io/projected/8cf4b24b-8b34-4e71-b8e8-31fb36974b9a-kube-api-access-n4l6q\") pod \"console-64d44f6ddf-n9p2l\" (UID: \"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a\") " pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.928467 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.935101 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.939876 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" event={"ID":"053be0da-d1f2-46d1-83b1-c9135f5c3c61","Type":"ContainerStarted","Data":"f5589e69be5aaa344af9f427fe5ec713753f7d68efe63d227421fc3bc8f5629d"} Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.939960 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-27j5x"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.940875 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.940957 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.941102 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.941485 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.941685 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.941848 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.944128 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.944286 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.944233 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.952614 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"] Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.957072 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.963472 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 17:42:28 crc kubenswrapper[5113]: I1208 17:42:28.979199 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.004691 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" event={"ID":"eef07e86-b52a-4599-8651-fc6852b3e627","Type":"ContainerStarted","Data":"f53c2471837d5fa6adc9046f749adae3f867c5fa4f49cd8666c68e0304f5bac5"} Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.004752 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-48lzh"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.005360 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.006061 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.016871 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.019600 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.020638 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.021465 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.022542 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.025281 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.025538 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.026081 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.031605 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.043239 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.048303 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.049683 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.065404 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.082852 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086478 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086520 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cssx4\" (UniqueName: \"kubernetes.io/projected/482cf010-c174-4209-9991-14d3251ee16e-kube-api-access-cssx4\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086556 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/401e85c2-a1e6-4642-80cf-23e461cef995-ca-trust-extracted\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086607 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-registry-tls\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086657 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/482cf010-c174-4209-9991-14d3251ee16e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086684 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6c452e-b1de-4119-acff-d87a7a328bf2-serving-cert\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086703 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e6c452e-b1de-4119-acff-d87a7a328bf2-tmp-dir\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086742 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-config\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086768 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-service-ca\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086793 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/401e85c2-a1e6-4642-80cf-23e461cef995-installation-pull-secrets\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086816 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/482cf010-c174-4209-9991-14d3251ee16e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086866 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/482cf010-c174-4209-9991-14d3251ee16e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086893 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/482cf010-c174-4209-9991-14d3251ee16e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086912 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/482cf010-c174-4209-9991-14d3251ee16e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086948 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-trusted-ca\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " 
pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086968 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-ca\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.086989 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-bound-sa-token\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.087010 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftj9s\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-kube-api-access-ftj9s\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.087029 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-client\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.087101 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-registry-certificates\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.087149 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscnp\" (UniqueName: \"kubernetes.io/projected/5e6c452e-b1de-4119-acff-d87a7a328bf2-kube-api-access-qscnp\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.087467 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:29.587455059 +0000 UTC m=+115.303248175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.115494 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.120670 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.154249 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.167773 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.167977 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.168341 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.169177 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.172847 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.176071 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.188630 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189240 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189400 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cssx4\" (UniqueName: \"kubernetes.io/projected/482cf010-c174-4209-9991-14d3251ee16e-kube-api-access-cssx4\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189435 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/401e85c2-a1e6-4642-80cf-23e461cef995-ca-trust-extracted\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189460 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-registry-tls\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189492 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/482cf010-c174-4209-9991-14d3251ee16e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189511 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6c452e-b1de-4119-acff-d87a7a328bf2-serving-cert\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189525 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e6c452e-b1de-4119-acff-d87a7a328bf2-tmp-dir\") pod 
\"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189543 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-config\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189560 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-service-ca\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189577 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/401e85c2-a1e6-4642-80cf-23e461cef995-installation-pull-secrets\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189603 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/482cf010-c174-4209-9991-14d3251ee16e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189648 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/482cf010-c174-4209-9991-14d3251ee16e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189664 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/482cf010-c174-4209-9991-14d3251ee16e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189680 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/482cf010-c174-4209-9991-14d3251ee16e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189703 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-trusted-ca\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 
17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-ca\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189736 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-bound-sa-token\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189753 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftj9s\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-kube-api-access-ftj9s\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-client\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189798 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-registry-certificates\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.189827 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qscnp\" (UniqueName: \"kubernetes.io/projected/5e6c452e-b1de-4119-acff-d87a7a328bf2-kube-api-access-qscnp\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.190594 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/482cf010-c174-4209-9991-14d3251ee16e-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.190822 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/482cf010-c174-4209-9991-14d3251ee16e-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.190936 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:29.690915721 +0000 UTC m=+115.406708837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.191767 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/401e85c2-a1e6-4642-80cf-23e461cef995-ca-trust-extracted\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.194435 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-ca\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.254833 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.254980 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e6c452e-b1de-4119-acff-d87a7a328bf2-tmp-dir\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.257314 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-registry-tls\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.259500 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-registry-certificates\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.259813 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-config\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.260466 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-client\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.261755 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.262287 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.262911 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.263070 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e6c452e-b1de-4119-acff-d87a7a328bf2-etcd-service-ca\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.263374 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.263546 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/401e85c2-a1e6-4642-80cf-23e461cef995-installation-pull-secrets\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.263738 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/482cf010-c174-4209-9991-14d3251ee16e-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.264167 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/482cf010-c174-4209-9991-14d3251ee16e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.265450 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-trusted-ca\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.265699 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.282201 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.292197 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.292746 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:29.792725512 +0000 UTC m=+115.508518628 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.292760 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.293172 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.302188 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.302652 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.303178 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.320611 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e6c452e-b1de-4119-acff-d87a7a328bf2-serving-cert\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.321712 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.349098 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.362685 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.402294 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.403352 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:29.903306291 +0000 UTC m=+115.619099507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.413707 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.418129 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.419512 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.444616 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-kjgph"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.445899 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.449299 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 17:42:29 crc kubenswrapper[5113]: W1208 17:42:29.456637 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16451d03_de16_4156_8838_9746b4fcd1a9.slice/crio-0da55ccaa6ca03ad4067a7913a07b69cbf449cfd6cc15d07f5bf0520eb10e21b WatchSource:0}: Error finding container 0da55ccaa6ca03ad4067a7913a07b69cbf449cfd6cc15d07f5bf0520eb10e21b: Status 404 returned error can't find the container with id 0da55ccaa6ca03ad4067a7913a07b69cbf449cfd6cc15d07f5bf0520eb10e21b Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.462222 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dd4zh"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.556528 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.557269 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.057251173 +0000 UTC m=+115.773044299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.571533 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: W1208 17:42:29.572340 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a01f3ac_05d5_4d04_8699_cc78bbd4df0e.slice/crio-c9b34d4e1229ddcbb090ea09c2f18d6a6b9106d266f16002ceffb6c463250ce8 WatchSource:0}: Error finding container c9b34d4e1229ddcbb090ea09c2f18d6a6b9106d266f16002ceffb6c463250ce8: Status 404 returned error can't find the container with id c9b34d4e1229ddcbb090ea09c2f18d6a6b9106d266f16002ceffb6c463250ce8 Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.591644 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.592269 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.592678 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.592989 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.593525 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.595877 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.597855 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.605770 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.606846 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.608268 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.608499 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.641493 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qscnp\" (UniqueName: \"kubernetes.io/projected/5e6c452e-b1de-4119-acff-d87a7a328bf2-kube-api-access-qscnp\") pod \"etcd-operator-69b85846b6-27j5x\" (UID: \"5e6c452e-b1de-4119-acff-d87a7a328bf2\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.658195 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.658388 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.158361376 +0000 UTC m=+115.874154492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.658701 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.659113 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.159099164 +0000 UTC m=+115.874892280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.751924 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cssx4\" (UniqueName: \"kubernetes.io/projected/482cf010-c174-4209-9991-14d3251ee16e-kube-api-access-cssx4\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.752982 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-bound-sa-token\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.754780 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.759165 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.760285 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/482cf010-c174-4209-9991-14d3251ee16e-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fg2zf\" (UID: \"482cf010-c174-4209-9991-14d3251ee16e\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.761595 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.261559731 +0000 UTC m=+115.977352837 (durationBeforeRetry 500ms). 
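The same driver gap blocks cleanup in the other direction: pod 9e9b5059-1b3e-4067-a63d-2952cbe863af (the previous image-registry replica being torn down) holds the same PVC, and UnmountVolume needs a CSI client just as MountVolume does, so kubelet re-queues both operations independently under nestedpendingoperations with the same 500 ms delay. When triaging a log this noisy, a rough tally of retries per volume separates a stuck volume from one-off hiccups; a minimal illustrative sketch over a captured journal excerpt (the file name is assumed):

    import re
    from collections import Counter

    # Matches the nestedpendingoperations error lines above and tallies
    # retries per volumeName. Purely illustrative log triage, assuming the
    # journal excerpt was saved to a plain-text file.
    RETRY_RE = re.compile(r'Operation for "\{volumeName:(?P<vol>\S+) podName:(?P<pod>\S*)')

    def count_retries(journal_text: str) -> Counter:
        return Counter(m.group("vol") for m in RETRY_RE.finditer(journal_text))

    with open("kubelet-excerpt.log") as f:
        for vol, n in count_retries(f.read()).most_common():
            print(n, vol)

In this window every hit is the one hostpath-provisioner volume. The teardown error reads: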
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.762005 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.767167 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftj9s\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-kube-api-access-ftj9s\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.794681 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.806298 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.827864 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.842294 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.860496 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.860942 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.360924021 +0000 UTC m=+116.076717137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.863749 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.867152 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.888269 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.888764 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.889650 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.896628 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.897131 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.897441 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.901363 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.908855 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.913300 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.924593 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.928557 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"] Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.929025 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.941761 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.961512 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.46147307 +0000 UTC m=+116.177266196 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.962546 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.962901 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.963246 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 17:42:29 crc kubenswrapper[5113]: E1208 17:42:29.963453 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.463428629 +0000 UTC m=+116.179221745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.982503 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 08 17:42:29 crc kubenswrapper[5113]: W1208 17:42:29.984242 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cf4b24b_8b34_4e71_b8e8_31fb36974b9a.slice/crio-0015c4dd25d96e332cc315b96b8f7f0318bbfd31eccc20207f2973ac3ed58bb7 WatchSource:0}: Error finding container 0015c4dd25d96e332cc315b96b8f7f0318bbfd31eccc20207f2973ac3ed58bb7: Status 404 returned error can't find the container with id 0015c4dd25d96e332cc315b96b8f7f0318bbfd31eccc20207f2973ac3ed58bb7
Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.991812 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"]
Dec 08 17:42:29 crc kubenswrapper[5113]: I1208 17:42:29.991990 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:30 crc kubenswrapper[5113]: W1208 17:42:30.002412 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95f7bc1b_b7d2_4096_aa07_fb1ba86b1ada.slice/crio-aa1da7555476c6dd9e9aba3e2804b0f24c772e35d3151d3c3b7149ecb6dea5fc WatchSource:0}: Error finding container aa1da7555476c6dd9e9aba3e2804b0f24c772e35d3151d3c3b7149ecb6dea5fc: Status 404 returned error can't find the container with id aa1da7555476c6dd9e9aba3e2804b0f24c772e35d3151d3c3b7149ecb6dea5fc
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.009732 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.028670 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.033545 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" event={"ID":"c96a8ac1-0465-4a81-88bf-472026300c81","Type":"ContainerStarted","Data":"9cf1f78717f861554ab92d2388b7a87e5af65cd24d7a7ca8b96ec055d649ee96"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.034618 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" event={"ID":"16451d03-de16-4156-8838-9746b4fcd1a9","Type":"ContainerStarted","Data":"0da55ccaa6ca03ad4067a7913a07b69cbf449cfd6cc15d07f5bf0520eb10e21b"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.034643 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" event={"ID":"4ea4d14e-889b-4611-a96e-02f40133e325","Type":"ContainerStarted","Data":"6966b07ba14ba79a94824f69b1645c1bcd5589b86114a7870ce9afe8e27205a1"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.039160 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" event={"ID":"9a9cf80c-c14e-4d96-9887-55bdabc78cec","Type":"ContainerStarted","Data":"80f4b48e0d89ae25ea804cd280f0f955764173036128dfb9e46271d6e4b2b020"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.039252 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" event={"ID":"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e","Type":"ContainerStarted","Data":"c9b34d4e1229ddcbb090ea09c2f18d6a6b9106d266f16002ceffb6c463250ce8"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.039272 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-n9p2l" event={"ID":"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a","Type":"ContainerStarted","Data":"0015c4dd25d96e332cc315b96b8f7f0318bbfd31eccc20207f2973ac3ed58bb7"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.039290 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" event={"ID":"9561139d-1882-40e1-bd1e-b45dd921005a","Type":"ContainerStarted","Data":"1bd484ad61f7f5fa3586078c633c5349eae3823ef710c23bde516137f4dc49f1"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.033776 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.039312 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.041578 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.059857 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" event={"ID":"17425c96-b772-49f5-8dca-94501ae13766","Type":"ContainerStarted","Data":"e9f743ac10b66012b82814167ddc3b2bc4f86a4e42603d59a96ac3d61b206c2f"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.059916 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" event={"ID":"eef07e86-b52a-4599-8651-fc6852b3e627","Type":"ContainerStarted","Data":"cd62af25361147f9234326510df25b4a30c843cdddc9defd7486226403190a2c"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.059966 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x2vwv"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.060212 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.074850 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.075541 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.575514886 +0000 UTC m=+116.291308002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.087182 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" event={"ID":"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91","Type":"ContainerStarted","Data":"01663936aac0fad235f5e7234f28ac2cfda812041d690bc69f7a7b9d2729b036"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.087265 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" event={"ID":"da8d1cb5-ad1f-48b7-8208-6b840f893cd5","Type":"ContainerStarted","Data":"cc1f157dd4b7e554075ea413216bda23784c0132c304162ce1b5bb841d3aa834"}
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.087287 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qrdwk"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.087542 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2vwv"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.088010 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-g2cx2"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.088443 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.090068 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124495 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124572 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-dvf7w"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124589 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124605 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-9pm5r"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124620 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-r9xfs"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124633 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-rddd2"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124646 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jmmc5"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124660 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-n767g"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124671 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-j9s7b"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124685 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124699 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124713 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124725 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124738 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-klln7"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124751 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124770 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-n9p2l"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124782 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124794 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-27j5x"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124806 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124818 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124832 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dd4zh"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124844 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124855 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124868 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124882 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.124895 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-62sbs"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.125927 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-g2cx2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.131861 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.144389 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.162865 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.173815 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nfj76"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177540 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bee79ad-69c2-45b0-bc04-e92af1900a27-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177651 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177694 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdrb\" (UniqueName: \"kubernetes.io/projected/6968f785-35be-457b-b97d-99098172ebdd-kube-api-access-6zdrb\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d67d90-c6bc-475e-891e-a90471f44e71-config\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177761 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e125c503-0c52-41c1-be81-e423204e8348-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177816 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d67d90-c6bc-475e-891e-a90471f44e71-kube-api-access\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177885 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e125c503-0c52-41c1-be81-e423204e8348-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177907 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e125c503-0c52-41c1-be81-e423204e8348-config\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177928 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-tmp-dir\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.177960 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6968f785-35be-457b-b97d-99098172ebdd-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178002 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7bee79ad-69c2-45b0-bc04-e92af1900a27-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178016 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7bee79ad-69c2-45b0-bc04-e92af1900a27-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178138 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbzz\" (UniqueName: \"kubernetes.io/projected/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-kube-api-access-kwbzz\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178170 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvcvr\" (UniqueName: \"kubernetes.io/projected/e125c503-0c52-41c1-be81-e423204e8348-kube-api-access-bvcvr\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178185 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-metrics-tls\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178263 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxspj\" (UniqueName: \"kubernetes.io/projected/7bee79ad-69c2-45b0-bc04-e92af1900a27-kube-api-access-fxspj\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178279 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7d67d90-c6bc-475e-891e-a90471f44e71-tmp-dir\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178294 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6968f785-35be-457b-b97d-99098172ebdd-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.178357 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d67d90-c6bc-475e-891e-a90471f44e71-serving-cert\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.179900 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.67988502 +0000 UTC m=+116.395678136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.184018 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.210971 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.213976 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214073 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214111 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-48lzh"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214147 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214182 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-g2cx2"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214208 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214236 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2vwv"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214256 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214269 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214282 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214295 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nfj76"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214315 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.214341 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5hq9p"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.216484 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nfj76"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.225728 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.229711 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-gwjhb"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.231194 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248056 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gwjhb"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248174 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-62sbs"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248230 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-9pm5r"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248297 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qrdwk"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248313 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248328 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-rddd2"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248347 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-j9s7b"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248363 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-n767g"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248383 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248398 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248411 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-dvf7w"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248424 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248437 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-jmmc5"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248453 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-n9p2l"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248469 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-klln7"]
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.248568 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gwjhb"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.257477 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.262550 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.279928 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280180 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kzjxt\" (UID: \"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt"
Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.280437 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.780181492 +0000 UTC m=+116.496189533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280527 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280617 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e125c503-0c52-41c1-be81-e423204e8348-config\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280690 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73f7620c-2bcd-4694-abf5-f2b84cefb86b-apiservice-cert\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280733 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnh9\" (UniqueName: \"kubernetes.io/projected/5578ddc6-8840-4d84-abce-93bc621d7aac-kube-api-access-fgnh9\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280761 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjlsx\" (UniqueName: \"kubernetes.io/projected/73f7620c-2bcd-4694-abf5-f2b84cefb86b-kube-api-access-cjlsx\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280788 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jddmm\" (UniqueName: \"kubernetes.io/projected/db5f5b00-b0bf-4fd2-9078-80554270a1b3-kube-api-access-jddmm\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.280813 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/06330ab4-fda1-473e-a461-4091dd3b78e8-tmpfs\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.282625 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e125c503-0c52-41c1-be81-e423204e8348-config\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.283211 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290167 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-metrics-tls\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290409 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fe571182-64c8-4e51-9d95-5777eafe1746-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290460 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5578ddc6-8840-4d84-abce-93bc621d7aac-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290488 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b027df0-f583-455e-a52b-68b4431d5394-config\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290531 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6968f785-35be-457b-b97d-99098172ebdd-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290557 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4wt\" (UniqueName: \"kubernetes.io/projected/1b027df0-f583-455e-a52b-68b4431d5394-kube-api-access-7n4wt\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290575 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5f5b00-b0bf-4fd2-9078-80554270a1b3-signing-key\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.290601 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fxspj\" (UniqueName: \"kubernetes.io/projected/7bee79ad-69c2-45b0-bc04-e92af1900a27-kube-api-access-fxspj\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.291240 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-config-volume\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.291317 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1d1632d-d9ab-4079-b57e-91366b0c2fde-webhook-certs\") pod \"multus-admission-controller-69db94689b-dd4zh\" (UID: \"b1d1632d-d9ab-4079-b57e-91366b0c2fde\") " pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.291933 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d67d90-c6bc-475e-891e-a90471f44e71-serving-cert\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.291994 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-stats-auth\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292082 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-default-certificate\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292114 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1855228c-8af6-4c85-afc8-513b36262cf6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzb5c\" (UniqueName: \"kubernetes.io/projected/c46cf580-9081-4eac-aee1-1dcd5d7df322-kube-api-access-fzb5c\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292231 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/06330ab4-fda1-473e-a461-4091dd3b78e8-profile-collector-cert\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292260 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h24hr\" (UniqueName: \"kubernetes.io/projected/7a5121ce-5d23-4bc7-925b-645160d834f3-kube-api-access-h24hr\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292510 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292557 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnvj\" (UniqueName: \"kubernetes.io/projected/9d8220da-8458-40d0-b093-c1a70b200985-kube-api-access-klnvj\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292617 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/73f7620c-2bcd-4694-abf5-f2b84cefb86b-tmpfs\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292771 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1855228c-8af6-4c85-afc8-513b36262cf6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.292923 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d67d90-c6bc-475e-891e-a90471f44e71-config\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.293659 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2d2\" (UniqueName: \"kubernetes.io/projected/eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58-kube-api-access-4t2d2\") pod \"package-server-manager-77f986bd66-kzjxt\" (UID: \"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt"
Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.294186 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.794163751 +0000 UTC m=+116.509957057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.295665 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7d67d90-c6bc-475e-891e-a90471f44e71-config\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.296847 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8220da-8458-40d0-b093-c1a70b200985-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.296918 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-secret-volume\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.296959 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1855228c-8af6-4c85-afc8-513b36262cf6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297010 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1855228c-8af6-4c85-afc8-513b36262cf6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297107 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp9zk\" (UniqueName: \"kubernetes.io/projected/9f20a9b3-632d-44ab-8721-6c512ea15262-kube-api-access-mp9zk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-5l9d7\" (UID: \"9f20a9b3-632d-44ab-8721-6c512ea15262\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297149 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e125c503-0c52-41c1-be81-e423204e8348-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297188 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297213 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297244 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fe571182-64c8-4e51-9d95-5777eafe1746-tmpfs\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297273 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d67d90-c6bc-475e-891e-a90471f44e71-kube-api-access\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297296 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-certs\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297338 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e125c503-0c52-41c1-be81-e423204e8348-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297365 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6968f785-35be-457b-b97d-99098172ebdd-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297393 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.297423 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d8220da-8458-40d0-b093-c1a70b200985-images\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.298054 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e125c503-0c52-41c1-be81-e423204e8348-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.298092 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-metrics-certs\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.298130 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-tmp-dir\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.298634 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b027df0-f583-455e-a52b-68b4431d5394-serving-cert\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.298706 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d824\" (UniqueName: \"kubernetes.io/projected/067b1191-de46-48dc-9922-80c85738d142-kube-api-access-2d824\") pod \"migrator-866fcbc849-9rcgg\" (UID: \"067b1191-de46-48dc-9922-80c85738d142\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.300723 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-metrics-tls\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.300998 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5f5b00-b0bf-4fd2-9078-80554270a1b3-signing-cabundle\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301025 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73f7620c-2bcd-4694-abf5-f2b84cefb86b-webhook-cert\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301067 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301257 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7bee79ad-69c2-45b0-bc04-e92af1900a27-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301406 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7bee79ad-69c2-45b0-bc04-e92af1900a27-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301476 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbzz\" (UniqueName: \"kubernetes.io/projected/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-kube-api-access-kwbzz\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301503 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6968f785-35be-457b-b97d-99098172ebdd-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301538 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bvcvr\" (UniqueName: \"kubernetes.io/projected/e125c503-0c52-41c1-be81-e423204e8348-kube-api-access-bvcvr\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301593 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-service-ca-bundle\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fe571182-64c8-4e51-9d95-5777eafe1746-srv-cert\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301899 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a5121ce-5d23-4bc7-925b-645160d834f3-metrics-tls\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.301949 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f20a9b3-632d-44ab-8721-6c512ea15262-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-5l9d7\" (UID: \"9f20a9b3-632d-44ab-8721-6c512ea15262\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302000 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dn45\" (UniqueName: \"kubernetes.io/projected/fe571182-64c8-4e51-9d95-5777eafe1746-kube-api-access-7dn45\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302055 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7d67d90-c6bc-475e-891e-a90471f44e71-tmp-dir\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302101 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8nhm\" (UniqueName: \"kubernetes.io/projected/06330ab4-fda1-473e-a461-4091dd3b78e8-kube-api-access-d8nhm\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302134 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bee79ad-69c2-45b0-bc04-e92af1900a27-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302162 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pt7c\" (UniqueName: \"kubernetes.io/projected/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-kube-api-access-7pt7c\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302200 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5578ddc6-8840-4d84-abce-93bc621d7aac-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302224 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5121ce-5d23-4bc7-925b-645160d834f3-config-volume\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302264 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-node-bootstrap-token\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.302861 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6968f785-35be-457b-b97d-99098172ebdd-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.303142 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c46cf580-9081-4eac-aee1-1dcd5d7df322-tmp\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.303260 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.303425 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.303667 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdrb\" (UniqueName: \"kubernetes.io/projected/6968f785-35be-457b-b97d-99098172ebdd-kube-api-access-6zdrb\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.304103 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d8220da-8458-40d0-b093-c1a70b200985-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.304168 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a5121ce-5d23-4bc7-925b-645160d834f3-tmp-dir\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.304201 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2r2\" (UniqueName: \"kubernetes.io/projected/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-kube-api-access-hn2r2\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.304275 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85stx\" (UniqueName: \"kubernetes.io/projected/b1d1632d-d9ab-4079-b57e-91366b0c2fde-kube-api-access-85stx\") pod \"multus-admission-controller-69db94689b-dd4zh\" (UID: \"b1d1632d-d9ab-4079-b57e-91366b0c2fde\") " pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.304348 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8b5\" (UniqueName: \"kubernetes.io/projected/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-kube-api-access-mz8b5\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.304415 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/06330ab4-fda1-473e-a461-4091dd3b78e8-srv-cert\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.305180 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7d67d90-c6bc-475e-891e-a90471f44e71-serving-cert\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.305222 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-tmp-dir\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh"
Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.305580 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7d67d90-c6bc-475e-891e-a90471f44e71-tmp-dir\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"
Dec 08 17:42:30 crc
kubenswrapper[5113]: I1208 17:42:30.305774 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7bee79ad-69c2-45b0-bc04-e92af1900a27-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.306163 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7bee79ad-69c2-45b0-bc04-e92af1900a27-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.311010 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e125c503-0c52-41c1-be81-e423204e8348-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.316534 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-27j5x"] Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.350582 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.351008 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.459080 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf"] Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.459177 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.459754 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.959697972 +0000 UTC m=+116.675491088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460111 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d8220da-8458-40d0-b093-c1a70b200985-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460165 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a5121ce-5d23-4bc7-925b-645160d834f3-tmp-dir\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460201 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2r2\" (UniqueName: \"kubernetes.io/projected/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-kube-api-access-hn2r2\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460240 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85stx\" (UniqueName: \"kubernetes.io/projected/b1d1632d-d9ab-4079-b57e-91366b0c2fde-kube-api-access-85stx\") pod \"multus-admission-controller-69db94689b-dd4zh\" (UID: \"b1d1632d-d9ab-4079-b57e-91366b0c2fde\") " pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460273 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mz8b5\" (UniqueName: \"kubernetes.io/projected/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-kube-api-access-mz8b5\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460303 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/06330ab4-fda1-473e-a461-4091dd3b78e8-srv-cert\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460333 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kzjxt\" (UID: \"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460362 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460430 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73f7620c-2bcd-4694-abf5-f2b84cefb86b-apiservice-cert\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460489 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-mountpoint-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460549 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fgnh9\" (UniqueName: \"kubernetes.io/projected/5578ddc6-8840-4d84-abce-93bc621d7aac-kube-api-access-fgnh9\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460574 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cjlsx\" (UniqueName: \"kubernetes.io/projected/73f7620c-2bcd-4694-abf5-f2b84cefb86b-kube-api-access-cjlsx\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460603 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jddmm\" (UniqueName: \"kubernetes.io/projected/db5f5b00-b0bf-4fd2-9078-80554270a1b3-kube-api-access-jddmm\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460639 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/06330ab4-fda1-473e-a461-4091dd3b78e8-tmpfs\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460670 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffa3574d-c847-4258-b8f3-7a044a52f07b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460702 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-socket-dir\") pod \"csi-hostpathplugin-nfj76\" 
(UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460765 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fe571182-64c8-4e51-9d95-5777eafe1746-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460803 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5578ddc6-8840-4d84-abce-93bc621d7aac-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460829 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b027df0-f583-455e-a52b-68b4431d5394-config\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460855 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4wt\" (UniqueName: \"kubernetes.io/projected/1b027df0-f583-455e-a52b-68b4431d5394-kube-api-access-7n4wt\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460885 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5f5b00-b0bf-4fd2-9078-80554270a1b3-signing-key\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460916 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqg9\" (UniqueName: \"kubernetes.io/projected/ffa3574d-c847-4258-b8f3-7a044a52f07b-kube-api-access-7pqg9\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460958 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-config-volume\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460992 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1d1632d-d9ab-4079-b57e-91366b0c2fde-webhook-certs\") pod \"multus-admission-controller-69db94689b-dd4zh\" (UID: \"b1d1632d-d9ab-4079-b57e-91366b0c2fde\") " pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461028 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-stats-auth\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461083 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-default-certificate\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461112 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1855228c-8af6-4c85-afc8-513b36262cf6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461149 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fzb5c\" (UniqueName: \"kubernetes.io/projected/c46cf580-9081-4eac-aee1-1dcd5d7df322-kube-api-access-fzb5c\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461179 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/06330ab4-fda1-473e-a461-4091dd3b78e8-profile-collector-cert\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461210 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h24hr\" (UniqueName: \"kubernetes.io/projected/7a5121ce-5d23-4bc7-925b-645160d834f3-kube-api-access-h24hr\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461271 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461309 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klnvj\" (UniqueName: \"kubernetes.io/projected/9d8220da-8458-40d0-b093-c1a70b200985-kube-api-access-klnvj\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461345 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6lvm\" (UniqueName: 
\"kubernetes.io/projected/17535922-286a-4eba-a833-f8feeb9af226-kube-api-access-k6lvm\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461377 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/73f7620c-2bcd-4694-abf5-f2b84cefb86b-tmpfs\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461412 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1855228c-8af6-4c85-afc8-513b36262cf6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461444 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58ec59b3-b3df-4362-8a55-195c1ac13192-cert\") pod \"ingress-canary-gwjhb\" (UID: \"58ec59b3-b3df-4362-8a55-195c1ac13192\") " pod="openshift-ingress-canary/ingress-canary-gwjhb" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461483 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4t2d2\" (UniqueName: \"kubernetes.io/projected/eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58-kube-api-access-4t2d2\") pod \"package-server-manager-77f986bd66-kzjxt\" (UID: \"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461718 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8220da-8458-40d0-b093-c1a70b200985-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461751 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-secret-volume\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461787 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1855228c-8af6-4c85-afc8-513b36262cf6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461820 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1855228c-8af6-4c85-afc8-513b36262cf6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: 
\"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461854 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mp9zk\" (UniqueName: \"kubernetes.io/projected/9f20a9b3-632d-44ab-8721-6c512ea15262-kube-api-access-mp9zk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-5l9d7\" (UID: \"9f20a9b3-632d-44ab-8721-6c512ea15262\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461888 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461918 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461946 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fe571182-64c8-4e51-9d95-5777eafe1746-tmpfs\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461974 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-certs\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462018 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462065 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d8220da-8458-40d0-b093-c1a70b200985-images\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462105 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-registration-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " 
pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462128 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-plugins-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462160 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffa3574d-c847-4258-b8f3-7a044a52f07b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462187 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ffa3574d-c847-4258-b8f3-7a044a52f07b-ready\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462212 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-metrics-certs\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462254 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b027df0-f583-455e-a52b-68b4431d5394-serving-cert\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462287 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2d824\" (UniqueName: \"kubernetes.io/projected/067b1191-de46-48dc-9922-80c85738d142-kube-api-access-2d824\") pod \"migrator-866fcbc849-9rcgg\" (UID: \"067b1191-de46-48dc-9922-80c85738d142\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462328 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5f5b00-b0bf-4fd2-9078-80554270a1b3-signing-cabundle\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462360 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73f7620c-2bcd-4694-abf5-f2b84cefb86b-webhook-cert\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462388 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462439 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-service-ca-bundle\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462492 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fe571182-64c8-4e51-9d95-5777eafe1746-srv-cert\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462529 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a5121ce-5d23-4bc7-925b-645160d834f3-metrics-tls\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462576 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f20a9b3-632d-44ab-8721-6c512ea15262-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-5l9d7\" (UID: \"9f20a9b3-632d-44ab-8721-6c512ea15262\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.460203 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462636 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dn45\" (UniqueName: \"kubernetes.io/projected/fe571182-64c8-4e51-9d95-5777eafe1746-kube-api-access-7dn45\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462675 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5szvj\" (UniqueName: \"kubernetes.io/projected/58ec59b3-b3df-4362-8a55-195c1ac13192-kube-api-access-5szvj\") pod \"ingress-canary-gwjhb\" (UID: \"58ec59b3-b3df-4362-8a55-195c1ac13192\") " pod="openshift-ingress-canary/ingress-canary-gwjhb" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462748 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d8nhm\" (UniqueName: \"kubernetes.io/projected/06330ab4-fda1-473e-a461-4091dd3b78e8-kube-api-access-d8nhm\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462792 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7pt7c\" (UniqueName: \"kubernetes.io/projected/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-kube-api-access-7pt7c\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462822 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-csi-data-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462865 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5578ddc6-8840-4d84-abce-93bc621d7aac-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462896 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5121ce-5d23-4bc7-925b-645160d834f3-config-volume\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462940 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-node-bootstrap-token\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.462969 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c46cf580-9081-4eac-aee1-1dcd5d7df322-tmp\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.463014 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.463841 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.463893 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7a5121ce-5d23-4bc7-925b-645160d834f3-tmp-dir\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.465678 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/06330ab4-fda1-473e-a461-4091dd3b78e8-tmpfs\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.468049 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.468480 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.471260 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-config\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.471876 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:30.971847585 +0000 UTC m=+116.687640711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.472209 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-stats-auth\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.461312 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.473638 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.475159 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d8220da-8458-40d0-b093-c1a70b200985-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.475165 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d8220da-8458-40d0-b093-c1a70b200985-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.475805 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/73f7620c-2bcd-4694-abf5-f2b84cefb86b-tmpfs\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.476994 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1d1632d-d9ab-4079-b57e-91366b0c2fde-webhook-certs\") pod \"multus-admission-controller-69db94689b-dd4zh\" (UID: \"b1d1632d-d9ab-4079-b57e-91366b0c2fde\") " pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.477343 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1855228c-8af6-4c85-afc8-513b36262cf6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.477479 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fe571182-64c8-4e51-9d95-5777eafe1746-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.477798 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5578ddc6-8840-4d84-abce-93bc621d7aac-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.477805 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c46cf580-9081-4eac-aee1-1dcd5d7df322-tmp\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.478500 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-default-certificate\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.478991 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1855228c-8af6-4c85-afc8-513b36262cf6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.479369 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d8220da-8458-40d0-b093-c1a70b200985-images\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.479381 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fe571182-64c8-4e51-9d95-5777eafe1746-tmpfs\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.480123 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-metrics-certs\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.480167 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-service-ca-bundle\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.480322 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1855228c-8af6-4c85-afc8-513b36262cf6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.480436 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.481906 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.483750 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.484149 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/06330ab4-fda1-473e-a461-4091dd3b78e8-profile-collector-cert\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.484219 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-secret-volume\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.484321 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/06330ab4-fda1-473e-a461-4091dd3b78e8-srv-cert\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.484837 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.485447 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5578ddc6-8840-4d84-abce-93bc621d7aac-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.493500 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9f20a9b3-632d-44ab-8721-6c512ea15262-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-5l9d7\" (UID: \"9f20a9b3-632d-44ab-8721-6c512ea15262\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.503326 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.514368 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fe571182-64c8-4e51-9d95-5777eafe1746-srv-cert\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.522092 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.541434 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.548312 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-kzjxt\" (UID: \"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.562223 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.564229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.564452 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.064409845 +0000 UTC m=+116.780202971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.564631 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-mountpoint-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.564719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffa3574d-c847-4258-b8f3-7a044a52f07b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.564789 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-socket-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.564906 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqg9\" (UniqueName: \"kubernetes.io/projected/ffa3574d-c847-4258-b8f3-7a044a52f07b-kube-api-access-7pqg9\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565067 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565093 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k6lvm\" (UniqueName: \"kubernetes.io/projected/17535922-286a-4eba-a833-f8feeb9af226-kube-api-access-k6lvm\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565155 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58ec59b3-b3df-4362-8a55-195c1ac13192-cert\") pod \"ingress-canary-gwjhb\" (UID: \"58ec59b3-b3df-4362-8a55-195c1ac13192\") " pod="openshift-ingress-canary/ingress-canary-gwjhb" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565260 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-registration-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565298 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-plugins-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565316 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffa3574d-c847-4258-b8f3-7a044a52f07b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565333 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ffa3574d-c847-4258-b8f3-7a044a52f07b-ready\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565459 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5szvj\" (UniqueName: \"kubernetes.io/projected/58ec59b3-b3df-4362-8a55-195c1ac13192-kube-api-access-5szvj\") pod \"ingress-canary-gwjhb\" (UID: \"58ec59b3-b3df-4362-8a55-195c1ac13192\") " pod="openshift-ingress-canary/ingress-canary-gwjhb" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565505 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-csi-data-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.565786 5113 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.065766968 +0000 UTC m=+116.781560084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.565834 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-csi-data-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.566330 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-mountpoint-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.566631 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-registration-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.566720 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-socket-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.566785 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffa3574d-c847-4258-b8f3-7a044a52f07b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.566721 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/17535922-286a-4eba-a833-f8feeb9af226-plugins-dir\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.567680 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ffa3574d-c847-4258-b8f3-7a044a52f07b-ready\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.571921 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/73f7620c-2bcd-4694-abf5-f2b84cefb86b-webhook-cert\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.581395 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73f7620c-2bcd-4694-abf5-f2b84cefb86b-apiservice-cert\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.581514 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.606490 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.627751 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.634532 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b027df0-f583-455e-a52b-68b4431d5394-serving-cert\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.642084 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.649822 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b027df0-f583-455e-a52b-68b4431d5394-config\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.661897 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.666801 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.666978 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.166945533 +0000 UTC m=+116.882738639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.667109 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.667706 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.167696062 +0000 UTC m=+116.883489178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.681818 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.707327 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.722027 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.729644 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a5121ce-5d23-4bc7-925b-645160d834f3-config-volume\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.741524 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.761639 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.768920 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.769456 5113 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.269432681 +0000 UTC m=+116.985225797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.769819 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.770192 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.270184379 +0000 UTC m=+116.985977485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.781275 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.788695 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5f5b00-b0bf-4fd2-9078-80554270a1b3-signing-cabundle\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.795694 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-config-volume\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.798427 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a5121ce-5d23-4bc7-925b-645160d834f3-metrics-tls\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.813481 5113 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.829821 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5f5b00-b0bf-4fd2-9078-80554270a1b3-signing-key\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.833505 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.856285 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.872171 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.873384 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.373354004 +0000 UTC m=+117.089147130 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.881709 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.881977 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.902505 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.925561 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.942312 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.947837 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffa3574d-c847-4258-b8f3-7a044a52f07b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.963759 5113 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.973472 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-certs\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs" Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.981442 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:30 crc kubenswrapper[5113]: E1208 17:42:30.981829 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.481814511 +0000 UTC m=+117.197607637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:30 crc kubenswrapper[5113]: I1208 17:42:30.984169 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.002930 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.021808 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.033787 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-node-bootstrap-token\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.041345 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.061583 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.082214 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.082828 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.582811111 +0000 UTC m=+117.298604227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.085541 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.155615 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58ec59b3-b3df-4362-8a55-195c1ac13192-cert\") pod \"ingress-canary-gwjhb\" (UID: \"58ec59b3-b3df-4362-8a55-195c1ac13192\") " pod="openshift-ingress-canary/ingress-canary-gwjhb" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.246772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.247236 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.747216523 +0000 UTC m=+117.463009639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.266343 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" event={"ID":"4ea4d14e-889b-4611-a96e-02f40133e325","Type":"ContainerStarted","Data":"b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.268502 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.271942 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxspj\" (UniqueName: \"kubernetes.io/projected/7bee79ad-69c2-45b0-bc04-e92af1900a27-kube-api-access-fxspj\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.272173 5113 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-rddd2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.272234 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.280223 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbzz\" (UniqueName: \"kubernetes.io/projected/b5bc0b8b-b537-4cae-8cc9-970eba4e8b44-kube-api-access-kwbzz\") pod \"dns-operator-799b87ffcd-48lzh\" (UID: \"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.284432 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7bee79ad-69c2-45b0-bc04-e92af1900a27-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-bs7z2\" (UID: \"7bee79ad-69c2-45b0-bc04-e92af1900a27\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.288836 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdrb\" (UniqueName: \"kubernetes.io/projected/6968f785-35be-457b-b97d-99098172ebdd-kube-api-access-6zdrb\") pod \"kube-storage-version-migrator-operator-565b79b866-6mtlt\" (UID: \"6968f785-35be-457b-b97d-99098172ebdd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 
17:42:31.289092 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7d67d90-c6bc-475e-891e-a90471f44e71-kube-api-access\") pod \"kube-apiserver-operator-575994946d-d2l67\" (UID: \"f7d67d90-c6bc-475e-891e-a90471f44e71\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.290736 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz8b5\" (UniqueName: \"kubernetes.io/projected/be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb-kube-api-access-mz8b5\") pod \"router-default-68cf44c8b8-kjgph\" (UID: \"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb\") " pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.292584 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f56553d9-63c5-47e4-baf9-9b3cfdf8c75f-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-7g2c4\" (UID: \"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.293702 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" event={"ID":"9561139d-1882-40e1-bd1e-b45dd921005a","Type":"ContainerStarted","Data":"8b953cfe87457b7fe26d05a85e6f3283337d30f501d818d60817915426b104bd"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.293876 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85stx\" (UniqueName: \"kubernetes.io/projected/b1d1632d-d9ab-4079-b57e-91366b0c2fde-kube-api-access-85stx\") pod \"multus-admission-controller-69db94689b-dd4zh\" (UID: \"b1d1632d-d9ab-4079-b57e-91366b0c2fde\") " pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.294004 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.298397 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerStarted","Data":"d9d477085e3401b43fbe32e2281560ce83dde15a88171f0fc7501cbb9de48b6d"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.298682 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvcvr\" (UniqueName: \"kubernetes.io/projected/e125c503-0c52-41c1-be81-e423204e8348-kube-api-access-bvcvr\") pod \"openshift-controller-manager-operator-686468bdd5-8dmhx\" (UID: \"e125c503-0c52-41c1-be81-e423204e8348\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.300065 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" event={"ID":"fb9642d5-438c-4cdb-ab4a-75a72e236fee","Type":"ContainerStarted","Data":"b52ec44fb6825e7014549d04055eddc3e21ba10c00390ccde96f04865627801a"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.308085 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" event={"ID":"eef07e86-b52a-4599-8651-fc6852b3e627","Type":"ContainerStarted","Data":"e3374e756fe5ff3f531a30f0723cf9ca08b70028d8397ddb612d9fd57f76fe4e"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.310461 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" event={"ID":"482cf010-c174-4209-9991-14d3251ee16e","Type":"ContainerStarted","Data":"88b6d82346fd3fe70909594b1e8c75910ab89a1cd2a1bd17ca76d8a22b3213bc"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.312009 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2r2\" (UniqueName: \"kubernetes.io/projected/611ac9b7-f05d-4755-bfba-3f54b1cbb7af-kube-api-access-hn2r2\") pod \"machine-config-server-62sbs\" (UID: \"611ac9b7-f05d-4755-bfba-3f54b1cbb7af\") " pod="openshift-machine-config-operator/machine-config-server-62sbs" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.312583 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" event={"ID":"16451d03-de16-4156-8838-9746b4fcd1a9","Type":"ContainerStarted","Data":"93e401cd31ecb63e8ba6d3dfd6949ca71c0c499888b2166196816c0e429eebdd"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.314703 5113 generic.go:358] "Generic (PLEG): container finished" podID="9a9cf80c-c14e-4d96-9887-55bdabc78cec" containerID="21d35087a7e3cae0ea89be7ebfa91f44ab5c6133540f3a22c221ceb3232923ea" exitCode=0 Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.314758 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" event={"ID":"9a9cf80c-c14e-4d96-9887-55bdabc78cec","Type":"ContainerDied","Data":"21d35087a7e3cae0ea89be7ebfa91f44ab5c6133540f3a22c221ceb3232923ea"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.318120 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" 
event={"ID":"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e","Type":"ContainerStarted","Data":"19ab9d8db982967e54b28898c1a50e8c8b738cdfc9f4bf93e6d3bfea01b3a632"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.324510 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-n9p2l" event={"ID":"8cf4b24b-8b34-4e71-b8e8-31fb36974b9a","Type":"ContainerStarted","Data":"3abed86e33f4d674c0ec18bc9ac4fc4deb4cf8bad2cb6a7c0f59be2f3142fff7"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.327582 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjlsx\" (UniqueName: \"kubernetes.io/projected/73f7620c-2bcd-4694-abf5-f2b84cefb86b-kube-api-access-cjlsx\") pod \"packageserver-7d4fc7d867-xbdkt\" (UID: \"73f7620c-2bcd-4694-abf5-f2b84cefb86b\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.328686 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" event={"ID":"17425c96-b772-49f5-8dca-94501ae13766","Type":"ContainerStarted","Data":"0dd538565877321c39023adc4ffe8860e82713adbb30fbc59eb10dc32a4bfb10"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.329359 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.330205 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" event={"ID":"4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91","Type":"ContainerStarted","Data":"824f2bc4388f1d054ef6432b49527499a13318108f233a9ef2722662e1ddaa7a"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.330698 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.332570 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" event={"ID":"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada","Type":"ContainerStarted","Data":"5160cad53b8f16131eb0c509a3f78b3a07d6b6f55064afc5647545d87ce5c0f0"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.332605 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" event={"ID":"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada","Type":"ContainerStarted","Data":"aa1da7555476c6dd9e9aba3e2804b0f24c772e35d3151d3c3b7149ecb6dea5fc"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.333861 5113 patch_prober.go:28] interesting pod/console-operator-67c89758df-qrdwk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.333918 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" podUID="4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.334320 5113 patch_prober.go:28] 
interesting pod/route-controller-manager-776cdc94d6-prc9d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.334356 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.334818 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" event={"ID":"5e6c452e-b1de-4119-acff-d87a7a328bf2","Type":"ContainerStarted","Data":"b668965aedccda10849a7d7fb51a7e82ceae76319f9ae257e5b408e54748b5ef"} Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.340176 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgnh9\" (UniqueName: \"kubernetes.io/projected/5578ddc6-8840-4d84-abce-93bc621d7aac-kube-api-access-fgnh9\") pod \"machine-config-controller-f9cdd68f7-5w66m\" (UID: \"5578ddc6-8840-4d84-abce-93bc621d7aac\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.354184 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.354323 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.854286345 +0000 UTC m=+117.570079461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.356632 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.358624 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.858597553 +0000 UTC m=+117.574390859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.366336 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4wt\" (UniqueName: \"kubernetes.io/projected/1b027df0-f583-455e-a52b-68b4431d5394-kube-api-access-7n4wt\") pod \"service-ca-operator-5b9c976747-jw54r\" (UID: \"1b027df0-f583-455e-a52b-68b4431d5394\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.369752 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.374277 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.407119 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jddmm\" (UniqueName: \"kubernetes.io/projected/db5f5b00-b0bf-4fd2-9078-80554270a1b3-kube-api-access-jddmm\") pod \"service-ca-74545575db-g2cx2\" (UID: \"db5f5b00-b0bf-4fd2-9078-80554270a1b3\") " pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.412437 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.430680 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.431460 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.431543 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.431547 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzb5c\" (UniqueName: \"kubernetes.io/projected/c46cf580-9081-4eac-aee1-1dcd5d7df322-kube-api-access-fzb5c\") pod \"marketplace-operator-547dbd544d-bhw9j\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.436988 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h24hr\" (UniqueName: \"kubernetes.io/projected/7a5121ce-5d23-4bc7-925b-645160d834f3-kube-api-access-h24hr\") pod \"dns-default-x2vwv\" (UID: \"7a5121ce-5d23-4bc7-925b-645160d834f3\") " pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.466863 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.467453 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:31.967432599 +0000 UTC m=+117.683225715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.472712 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klnvj\" (UniqueName: \"kubernetes.io/projected/9d8220da-8458-40d0-b093-c1a70b200985-kube-api-access-klnvj\") pod \"machine-config-operator-67c9d58cbb-749v6\" (UID: \"9d8220da-8458-40d0-b093-c1a70b200985\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.481262 5113 request.go:752] "Waited before sending request" delay="1.004631879s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/collect-profiles/token" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.489354 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8nhm\" (UniqueName: \"kubernetes.io/projected/06330ab4-fda1-473e-a461-4091dd3b78e8-kube-api-access-d8nhm\") pod \"olm-operator-5cdf44d969-n5msr\" (UID: \"06330ab4-fda1-473e-a461-4091dd3b78e8\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.499443 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pt7c\" (UniqueName: 
\"kubernetes.io/projected/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-kube-api-access-7pt7c\") pod \"collect-profiles-29420250-w8vp7\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.503955 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.510888 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.536639 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t2d2\" (UniqueName: \"kubernetes.io/projected/eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58-kube-api-access-4t2d2\") pod \"package-server-manager-77f986bd66-kzjxt\" (UID: \"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.545330 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.557369 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp9zk\" (UniqueName: \"kubernetes.io/projected/9f20a9b3-632d-44ab-8721-6c512ea15262-kube-api-access-mp9zk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-5l9d7\" (UID: \"9f20a9b3-632d-44ab-8721-6c512ea15262\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.569946 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-g2cx2" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.570368 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.570770 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.070755397 +0000 UTC m=+117.786548513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.582846 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d824\" (UniqueName: \"kubernetes.io/projected/067b1191-de46-48dc-9922-80c85738d142-kube-api-access-2d824\") pod \"migrator-866fcbc849-9rcgg\" (UID: \"067b1191-de46-48dc-9922-80c85738d142\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.703512 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.714266 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.714569 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.714856 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.715344 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.715653 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.716781 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:31 crc kubenswrapper[5113]: E1208 17:42:31.717505 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.217489849 +0000 UTC m=+117.933282965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.720482 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.723750 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1855228c-8af6-4c85-afc8-513b36262cf6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-6f6vr\" (UID: \"1855228c-8af6-4c85-afc8-513b36262cf6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.739955 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dn45\" (UniqueName: \"kubernetes.io/projected/fe571182-64c8-4e51-9d95-5777eafe1746-kube-api-access-7dn45\") pod \"catalog-operator-75ff9f647d-vmlfn\" (UID: \"fe571182-64c8-4e51-9d95-5777eafe1746\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.799003 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5szvj\" (UniqueName: \"kubernetes.io/projected/58ec59b3-b3df-4362-8a55-195c1ac13192-kube-api-access-5szvj\") pod \"ingress-canary-gwjhb\" (UID: \"58ec59b3-b3df-4362-8a55-195c1ac13192\") " pod="openshift-ingress-canary/ingress-canary-gwjhb" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.799756 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.800550 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:31 crc kubenswrapper[5113]: I1208 17:42:31.800858 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:31.854444 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:31.855196 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.355164864 +0000 UTC m=+118.070957980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:31.887761 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rzvvg" podStartSLOduration=94.887737047 podStartE2EDuration="1m34.887737047s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:31.887336727 +0000 UTC m=+117.603129853" watchObservedRunningTime="2025-12-08 17:42:31.887737047 +0000 UTC m=+117.603530163" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:31.938634 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:31.955602 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:31.956570 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.456547344 +0000 UTC m=+118.172340450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:31.999486 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" podStartSLOduration=94.999463625 podStartE2EDuration="1m34.999463625s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:31.998735857 +0000 UTC m=+117.714528993" watchObservedRunningTime="2025-12-08 17:42:31.999463625 +0000 UTC m=+117.715256751" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.026400 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6lvm\" (UniqueName: \"kubernetes.io/projected/17535922-286a-4eba-a833-f8feeb9af226-kube-api-access-k6lvm\") pod \"csi-hostpathplugin-nfj76\" (UID: \"17535922-286a-4eba-a833-f8feeb9af226\") " pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.030885 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" podStartSLOduration=95.030863939 podStartE2EDuration="1m35.030863939s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:32.019829853 +0000 UTC m=+117.735622969" watchObservedRunningTime="2025-12-08 17:42:32.030863939 +0000 UTC m=+117.746657055" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.038565 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqg9\" (UniqueName: \"kubernetes.io/projected/ffa3574d-c847-4258-b8f3-7a044a52f07b-kube-api-access-7pqg9\") pod \"cni-sysctl-allowlist-ds-5hq9p\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.061097 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.061513 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.561501384 +0000 UTC m=+118.277294500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.202833 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.203625 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.703595069 +0000 UTC m=+118.419388195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.214805 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.327931 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.328718 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.828701931 +0000 UTC m=+118.544495047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.429071 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.429538 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.929496176 +0000 UTC m=+118.645289292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.430309 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.430757 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:32.930750298 +0000 UTC m=+118.646543414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.447177 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.497170 5113 util.go:30] "No sandbox for pod can be found. 
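Every MountDevice and TearDown failure above shares one cause: the kubevirt.io.hostpath-provisioner CSI plugin has not yet registered with this kubelet, and its node plugin pod (hostpath-provisioner/csi-hostpathplugin-nfj76) is itself only now getting a sandbox. A minimal client-go sketch for checking, from outside the node, which CSI drivers the kubelet has registered; the kubeconfig location and the node name "crc" (taken from the log hostname) are assumptions, not read from this capture:

    // csidrivers.go - sketch: list the CSI plugins registered on node "crc".
    // The CSINode object mirrors the kubelet's per-node driver registrations.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a usable kubeconfig at the default ~/.kube/config path.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // "crc" is assumed from the journal hostname above.
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            // kubevirt.io.hostpath-provisioner should appear here once the
            // csi-hostpathplugin pod starts and registration completes.
            fmt.Println(d.Name)
        }
    }

Until the driver name shows up in that list, the kubelet cannot build a CSI client for the volume, which is exactly the "not found in the list of registered CSI drivers" error repeating below.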
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.506614 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" event={"ID":"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb","Type":"ContainerStarted","Data":"a145ef5377ae85799b8fa9de886db0f16bb3bb19d318e736b63f735e4c7937d3"}
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.519098 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-gwjhb"
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.531419 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.532962 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.032937578 +0000 UTC m=+118.748730694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.586064 5113 generic.go:358] "Generic (PLEG): container finished" podID="95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada" containerID="5160cad53b8f16131eb0c509a3f78b3a07d6b6f55064afc5647545d87ce5c0f0" exitCode=0
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.587242 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" event={"ID":"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada","Type":"ContainerDied","Data":"5160cad53b8f16131eb0c509a3f78b3a07d6b6f55064afc5647545d87ce5c0f0"}
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.665087 5113 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-rddd2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.665184 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.666698 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.667146 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.167129997 +0000 UTC m=+118.882923103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.685108 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-kcdmq" podStartSLOduration=95.685086955 podStartE2EDuration="1m35.685086955s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:32.683903475 +0000 UTC m=+118.399696601" watchObservedRunningTime="2025-12-08 17:42:32.685086955 +0000 UTC m=+118.400880071"
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.790853 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.791251 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.291229154 +0000 UTC m=+119.007022270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.908807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:32 crc kubenswrapper[5113]: E1208 17:42:32.909574 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.409554136 +0000 UTC m=+119.125347242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.950575 5113 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-prc9d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.950693 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Dec 08 17:42:32 crc kubenswrapper[5113]: I1208 17:42:32.972963 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" podStartSLOduration=95.972907107 podStartE2EDuration="1m35.972907107s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:32.972794644 +0000 UTC m=+118.688587770" watchObservedRunningTime="2025-12-08 17:42:32.972907107 +0000 UTC m=+118.688700223"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.006528 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-lxlpl" podStartSLOduration=96.006487515 podStartE2EDuration="1m36.006487515s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:33.003976862 +0000 UTC m=+118.719769988" watchObservedRunningTime="2025-12-08 17:42:33.006487515 +0000 UTC m=+118.722280641"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.011084 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.011734 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.511713095 +0000 UTC m=+119.227506211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
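Each of these failures is not retried immediately: the "No retries permitted until ... (durationBeforeRetry 500ms)" suffix records the earliest next attempt, and the kubelet's pending-operations tracker applies exponential backoff between consecutive failures of the same operation. Every entry in this capture still shows the initial 500ms step. A toy sketch of such a schedule, assuming a 500ms initial delay that doubles per failure up to a ceiling; the ceiling value below is illustrative, not taken from this log:

    // backoff.go - toy exponential retry schedule: 500ms initial delay,
    // doubled after each consecutive failure, capped at a fixed ceiling.
    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the current delay but never exceeds the ceiling.
    func nextDelay(cur, ceiling time.Duration) time.Duration {
        if next := 2 * cur; next < ceiling {
            return next
        }
        return ceiling
    }

    func main() {
        delay := 500 * time.Millisecond       // matches "durationBeforeRetry 500ms" above
        ceiling := 2*time.Minute + 2*time.Second // assumed cap, for illustration only
        for attempt := 1; attempt <= 9; attempt++ {
            fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
            delay = nextDelay(delay, ceiling)
        }
    }

This is why the same volume appears in matched UnmountVolume/MountVolume pairs roughly every half second below: the reconciler re-queues the operation as soon as the backoff window expires, and it fails again until the driver registers.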
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.113011 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.113552 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.613534366 +0000 UTC m=+119.329327482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.218898 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.220428 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:33.720405793 +0000 UTC m=+119.436198909 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.594660 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.595626 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.095606566 +0000 UTC m=+119.811399682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.688825 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" event={"ID":"5e6c452e-b1de-4119-acff-d87a7a328bf2","Type":"ContainerStarted","Data":"36e5792c21287bf91a8c186388b78a3b9e279c64fc25b62ec284093226c3d85b"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.699264 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.699515 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.199492638 +0000 UTC m=+119.915285754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.699802 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.701737 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.201715014 +0000 UTC m=+119.917508200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.717099 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerStarted","Data":"2e9ff9256516366dec5700efec9aa9ea1b5a5e334cabc75b15009630a9f7f12f"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.717676 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-klln7"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.720934 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" event={"ID":"fb9642d5-438c-4cdb-ab4a-75a72e236fee","Type":"ContainerStarted","Data":"4b8b289d8bc8515f056d89cb8c84a8741189ed3c2d2ad284fa5efce09d74cab3"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.722083 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" event={"ID":"c96a8ac1-0465-4a81-88bf-472026300c81","Type":"ContainerStarted","Data":"c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.726295 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.726686 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.726749 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.736531 5113 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-dvf7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.736617 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.753276 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" event={"ID":"482cf010-c174-4209-9991-14d3251ee16e","Type":"ContainerStarted","Data":"a1a7543b74af51e786dcfe09a748f6561e8a1e365b3f22c530aaad1e6fac317d"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.789450 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" event={"ID":"ffa3574d-c847-4258-b8f3-7a044a52f07b","Type":"ContainerStarted","Data":"2c387b40bae698aaff13c1e452657f39a9600cfa7c0a3bb1854915bacbf0bae7"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.800989 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.801841 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.301811371 +0000 UTC m=+120.017604487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
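The Readiness probe failures interleaved above are plain HTTP GETs issued by the kubelet against each pod IP; "connect: connection refused" just means the container process is not listening yet. A self-contained sketch of the equivalent check (the URL is copied from the downloads pod's probe above; the one-second timeout is an assumed value, and for HTTP probes the kubelet treats any status of at least 200 and below 400 as success):

    // probecheck.go - issues the same kind of HTTP GET a kubelet readiness
    // probe performs; a dial error or out-of-range status means "not ready".
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second} // probe timeout: assumed
        resp, err := client.Get("http://10.217.0.13:8080/") // URL from the downloads pod probe above
        if err != nil {
            fmt.Println("probe failed:", err) // e.g. connect: connection refused
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            fmt.Println("ready:", resp.Status)
        } else {
            fmt.Println("not ready:", resp.Status)
        }
    }

These probe failures are expected noise during startup: the containers have just been started by PLEG and their servers need a few seconds before the endpoints accept connections.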
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.802249 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.811371 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.311295987 +0000 UTC m=+120.027089523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.894219 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-d444j" podStartSLOduration=96.894197646 podStartE2EDuration="1m36.894197646s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:33.891525699 +0000 UTC m=+119.607318825" watchObservedRunningTime="2025-12-08 17:42:33.894197646 +0000 UTC m=+119.609990772"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.894615 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" event={"ID":"9a01f3ac-05d5-4d04-8699-cc78bbd4df0e","Type":"ContainerStarted","Data":"fa569de1905c77fb7b942f11b83c46e65c50f798c9358300cb991bf5749adba9"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.896723 5113 generic.go:358] "Generic (PLEG): container finished" podID="da8d1cb5-ad1f-48b7-8208-6b840f893cd5" containerID="7737d45ecdf9f923b7b2d77b58153c055084400bee6f559af06e458ed846e897" exitCode=0
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.896785 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" event={"ID":"da8d1cb5-ad1f-48b7-8208-6b840f893cd5","Type":"ContainerDied","Data":"7737d45ecdf9f923b7b2d77b58153c055084400bee6f559af06e458ed846e897"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.904115 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:33 crc kubenswrapper[5113]: E1208 17:42:33.905496 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.405462517 +0000 UTC m=+120.121255633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.931442 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-62sbs" event={"ID":"611ac9b7-f05d-4755-bfba-3f54b1cbb7af","Type":"ContainerStarted","Data":"cabfc3966081f1b1033aa8f15f706a7a13b9ba3ef3f265b5a02c920869b3ff94"}
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.932411 5113 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-rddd2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.932485 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.933110 5113 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-prc9d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Dec 08 17:42:33 crc kubenswrapper[5113]: I1208 17:42:33.933139 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused"
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.006252 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.007069 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.507051622 +0000 UTC m=+120.222844738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.042114 5113 patch_prober.go:28] interesting pod/console-operator-67c89758df-qrdwk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.042193 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" podUID="4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.172240 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.172929 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.672909441 +0000 UTC m=+120.388702557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.173105 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.176623 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-n9p2l" podStartSLOduration=97.176611403 podStartE2EDuration="1m37.176611403s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:33.931561158 +0000 UTC m=+119.647354284" watchObservedRunningTime="2025-12-08 17:42:34.176611403 +0000 UTC m=+119.892404519"
Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.178636 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.678615043 +0000 UTC m=+120.394408209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.268585 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-klln7" podStartSLOduration=97.268559758 podStartE2EDuration="1m37.268559758s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:34.263726337 +0000 UTC m=+119.979519473" watchObservedRunningTime="2025-12-08 17:42:34.268559758 +0000 UTC m=+119.984352874"
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.274687 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.275415 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.775390608 +0000 UTC m=+120.491183724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.275524 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.276700 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.77666779 +0000 UTC m=+120.492460906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.376782 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.377418 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:34.877389824 +0000 UTC m=+120.593182940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
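The "Observed pod startup duration" records scattered through this capture come from the kubelet's pod_startup_latency_tracker. With both firstStartedPulling and lastFinishedPulling at the zero time (no image pull was observed), the logged podStartSLOduration is simply the observed running time minus podCreationTimestamp. A small sketch reproducing that arithmetic from the multus-additional-cni-plugins-rzvvg entry near the start of this capture:

    // sloduration.go - recomputes podStartSLOduration for one logged entry:
    // watchObservedRunningTime minus podCreationTimestamp, since no image
    // pulling was recorded (both pull timestamps are the zero time).
    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-12-08 17:40:57 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        running, err := time.Parse(layout, "2025-12-08 17:42:31.887737047 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        // Prints 1m34.887737047s, i.e. podStartSLOduration=94.887737047.
        fmt.Println(running.Sub(created))
    }

The roughly 95-97 second figures across these pods all share the same 17:40:57 creation timestamp, so they reflect how long after boot the node took to get each pod running, not per-pod slowness.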
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.475639 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-9pm5r" podStartSLOduration=97.475606735 podStartE2EDuration="1m37.475606735s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:34.398200703 +0000 UTC m=+120.113993909" watchObservedRunningTime="2025-12-08 17:42:34.475606735 +0000 UTC m=+120.191399851" Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.508631 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.508979 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.008964817 +0000 UTC m=+120.724757933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.514962 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-27j5x" podStartSLOduration=97.514923356 podStartE2EDuration="1m37.514923356s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:34.482344313 +0000 UTC m=+120.198137439" watchObservedRunningTime="2025-12-08 17:42:34.514923356 +0000 UTC m=+120.230716482" Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.588661 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fg2zf" podStartSLOduration=97.588644806 podStartE2EDuration="1m37.588644806s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:34.588178764 +0000 UTC m=+120.303971900" watchObservedRunningTime="2025-12-08 17:42:34.588644806 +0000 UTC m=+120.304437922" Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.589207 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" podStartSLOduration=97.58920075 podStartE2EDuration="1m37.58920075s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:34.570998055 +0000 UTC m=+120.286791191" watchObservedRunningTime="2025-12-08 17:42:34.58920075 +0000 UTC m=+120.304993856" Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.611435 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.612172 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.112150362 +0000 UTC m=+120.827943478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.805151 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.805625 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.30560822 +0000 UTC m=+121.021401336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.908590 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:34 crc kubenswrapper[5113]: E1208 17:42:34.909067 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.40901992 +0000 UTC m=+121.124813036 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.935184 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.935270 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.935416 5113 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-dvf7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Dec 08 17:42:34 crc kubenswrapper[5113]: I1208 17:42:34.935481 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.011903 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.014666 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.514643316 +0000 UTC m=+121.230436482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.043641 5113 patch_prober.go:28] interesting pod/console-operator-67c89758df-qrdwk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.043747 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qrdwk" podUID="4c28b7b9-34aa-46fe-b1dc-7f8209e5aa91" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.081575 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" event={"ID":"9a9cf80c-c14e-4d96-9887-55bdabc78cec","Type":"ContainerStarted","Data":"79e3ae4d830641568b7502e550577054c806d7258e0aad5a67fd24f3747173e9"} Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.115994 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.116227 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.6161947 +0000 UTC m=+121.331987816 (durationBeforeRetry 500ms). 
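The interleaved readiness-probe failures ("connection refused" for downloads and oauth-openshift, a client timeout for console-operator) are the kubelet's HTTP prober hitting pod IPs whose servers are not up yet. A rough approximation of a single probe attempt as prober.go performs it, one GET with a short timeout and success on a 2xx/3xx status, using the endpoints from the log (the real prober adds headers and other details omitted in this sketch):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeOnce approximates one kubelet HTTP(S) probe attempt: a single GET with
// a short timeout, succeeding on any 2xx or 3xx status code. The kubelet also
// skips certificate verification for HTTPS probes, mimicked here.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   timeout,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the entries above
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unexpected status %d", resp.StatusCode)
}

func main() {
	// Endpoints copied from the failing probes in the log.
	for _, url := range []string{
		"http://10.217.0.13:8080/",
		"https://10.217.0.9:6443/healthz",
		"https://10.217.0.10:8443/readyz",
	} {
		if err := probeOnce(url, 1*time.Second); err != nil {
			fmt.Printf("probe %s failed: %v\n", url, err)
		}
	}
}
```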
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.116928 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.117262 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.617249626 +0000 UTC m=+121.333042742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.220204 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.221026 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.721001096 +0000 UTC m=+121.436794212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.322410 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.323136 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.823119384 +0000 UTC m=+121.538912500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.427906 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.428837 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:35.928815601 +0000 UTC m=+121.644608717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.531394 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.532001 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.031976126 +0000 UTC m=+121.747769312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.632798 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.632978 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.132948636 +0000 UTC m=+121.848741752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.633165 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.633625 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.133609482 +0000 UTC m=+121.849402598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.773309 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.773492 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.273457262 +0000 UTC m=+121.989250398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.773829 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.774256 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.274245591 +0000 UTC m=+121.990038707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.874765 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.874881 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.374855152 +0000 UTC m=+122.090648278 (durationBeforeRetry 500ms). 
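Each failed volume operation is parked by nestedpendingoperations.go with a "No retries permitted until ..." deadline; the durationBeforeRetry of 500ms in every entry here is the initial delay the kubelet applies between attempts. The shape of that retry loop can be sketched with the apimachinery wait helpers; the 500ms initial delay matches the log, while the factor, step count, and cap below are illustrative assumptions rather than the kubelet's exact tuning:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// 500ms initial delay as logged; Factor, Steps, and Cap are assumptions
	// for illustration, not the kubelet's actual backoff configuration.
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond,
		Factor:   2.0,
		Steps:    5,
		Cap:      10 * time.Second,
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: driver still not registered, will retry\n", attempt)
		// (false, nil) means "not done, retry after the next delay", much like
		// the reconciler re-queueing MountDevice/TearDown in the entries above.
		return false, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```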
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.875106 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.875557 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.375536569 +0000 UTC m=+122.091329685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.961431 5113 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-dvf7w container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.961499 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.975730 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.976083 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.476001986 +0000 UTC m=+122.191795102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:35 crc kubenswrapper[5113]: I1208 17:42:35.976704 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:35 crc kubenswrapper[5113]: E1208 17:42:35.977502 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.477484093 +0000 UTC m=+122.193277399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.002865 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" podStartSLOduration=99.002845916 podStartE2EDuration="1m39.002845916s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:35.999973224 +0000 UTC m=+121.715766350" watchObservedRunningTime="2025-12-08 17:42:36.002845916 +0000 UTC m=+121.718639032" Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.006758 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.011546 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.044121 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-g2cx2"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.078458 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.078740 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:36.578703539 +0000 UTC m=+122.294496655 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.180397 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.180871 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.680852028 +0000 UTC m=+122.396645144 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.184771 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.201069 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-dd4zh"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.205005 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.207492 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.210183 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.214501 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.217491 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-48lzh"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.220360 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.232315 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.239607 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.252460 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2vwv"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.255327 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.259937 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.261835 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-nfj76"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.276594 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-gwjhb"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.279677 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7"]
Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.282105 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.282722 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.782698849 +0000 UTC m=+122.498491965 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.286369 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.291715 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.291792 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.292621 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m"] Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.384512 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.385111 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.885085344 +0000 UTC m=+122.600878460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.486656 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.486889 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.986844494 +0000 UTC m=+122.702637610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.487086 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.487569 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:36.987545491 +0000 UTC m=+122.703338797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.589181 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.589355 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.089328931 +0000 UTC m=+122.805122047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.589494 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.589912 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.089903895 +0000 UTC m=+122.805697011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.691385 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.691664 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.191615804 +0000 UTC m=+122.907408930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.692213 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.692639 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.192616439 +0000 UTC m=+122.908409735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.793258 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.793395 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.293360003 +0000 UTC m=+123.009153119 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.793695 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.794165 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.294153592 +0000 UTC m=+123.009946708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.895462 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.39543002 +0000 UTC m=+123.111223146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.895307 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.895945 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.896415 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.396405334 +0000 UTC m=+123.112198450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:36 crc kubenswrapper[5113]: I1208 17:42:36.997617 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:36 crc kubenswrapper[5113]: E1208 17:42:36.998088 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.498070031 +0000 UTC m=+123.213863147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.001891 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt" event={"ID":"6968f785-35be-457b-b97d-99098172ebdd","Type":"ContainerStarted","Data":"a8f0eed11f8e8c71f0ca5527983c44866f34706abf2b1e0c2faf4926c771cc73"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.001960 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" event={"ID":"1b027df0-f583-455e-a52b-68b4431d5394","Type":"ContainerStarted","Data":"3eac3a8e5305de00e324035310d841610cfc95cfc27b81d8d916ee3f0c444649"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.001974 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-g2cx2" event={"ID":"db5f5b00-b0bf-4fd2-9078-80554270a1b3","Type":"ContainerStarted","Data":"a8e7e26ab0265bc1d67a1b73c6dcb709fd6364561a1e334438970e014541dbaa"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.001989 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" event={"ID":"7bee79ad-69c2-45b0-bc04-e92af1900a27","Type":"ContainerStarted","Data":"643c5b620542c5ab4602b67c6afb678285cc7d73544f6063dd7e6c28a3dbd418"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.008563 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67" event={"ID":"f7d67d90-c6bc-475e-891e-a90471f44e71","Type":"ContainerStarted","Data":"458f3b36b971379479ec38c7cd9c4f3056c605cbfb9eb840cad37f7572a6c1d0"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.014335 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" event={"ID":"b1d1632d-d9ab-4079-b57e-91366b0c2fde","Type":"ContainerStarted","Data":"35d52815d9b450c5131ba968fbc0ce4fb07f2b7581950ffbfc88b450994a4a11"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.023779 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" event={"ID":"06330ab4-fda1-473e-a461-4091dd3b78e8","Type":"ContainerStarted","Data":"6138f7e0c840e2433ce625853c1e8ce80ec60dd3df716037a62f948b6929a0c4"}
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.099338 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.099876 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.599859241 +0000 UTC m=+123.315652357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.200740 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.200863 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.700836341 +0000 UTC m=+123.416629457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.201552 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.202024 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.70201204 +0000 UTC m=+123.417805156 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.302576 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.302882 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.802865047 +0000 UTC m=+123.518658163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.404747 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.405571 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:37.905552908 +0000 UTC m=+123.621346024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.507660 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.509677 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.009644786 +0000 UTC m=+123.725437902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.611192 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.611701 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.111678702 +0000 UTC m=+123.827471818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.712878 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.712994 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.21297283 +0000 UTC m=+123.928765946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.713187 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.713559 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.213545694 +0000 UTC m=+123.929338820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.732294 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32822: no serving certificate available for the kubelet"
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.771471 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32834: no serving certificate available for the kubelet"
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.808515 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32836: no serving certificate available for the kubelet"
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.814495 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.814717 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.314682528 +0000 UTC m=+124.030475654 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.815210 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.815667 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.315635302 +0000 UTC m=+124.031428418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.915346 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32846: no serving certificate available for the kubelet"
Dec 08 17:42:37 crc kubenswrapper[5113]: I1208 17:42:37.917494 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:37 crc kubenswrapper[5113]: E1208 17:42:37.918067 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.418028477 +0000 UTC m=+124.133821593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.010798 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32858: no serving certificate available for the kubelet"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.018914 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.019328 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.519313434 +0000 UTC m=+124.235106550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.032071 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gwjhb" event={"ID":"58ec59b3-b3df-4362-8a55-195c1ac13192","Type":"ContainerStarted","Data":"46d9ec062c5b997b45a387c20c1880efd7bad82256b28ebeeccc52e862e25d37"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.033954 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" event={"ID":"ffa3574d-c847-4258-b8f3-7a044a52f07b","Type":"ContainerStarted","Data":"84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.035409 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" event={"ID":"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152","Type":"ContainerStarted","Data":"c8e518fe04c6684fc2cfb3ca5924d9695810239a2fc498b61e78215eb9f5f3f9"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.036897 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" event={"ID":"9d8220da-8458-40d0-b093-c1a70b200985","Type":"ContainerStarted","Data":"a172d72f1f2434201155dd7b49ae9cde1d73678918030d7485c0a2c55bbc382f"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.039647 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" event={"ID":"da8d1cb5-ad1f-48b7-8208-6b840f893cd5","Type":"ContainerStarted","Data":"21728885d075075ed845bf57415bb123085473ea598349982723754ec342dd59"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.040944 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2vwv" event={"ID":"7a5121ce-5d23-4bc7-925b-645160d834f3","Type":"ContainerStarted","Data":"6e0946554fd70c247fd5a2d9c677f77fe799772d6dacb9ed0e44195dcb0a0bbe"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.042430 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" event={"ID":"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58","Type":"ContainerStarted","Data":"1bf799d30c04723166dd84c90d9e17cdb54f877f6e776c1356cc07737b6a9ef7"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.043710 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" event={"ID":"17535922-286a-4eba-a833-f8feeb9af226","Type":"ContainerStarted","Data":"f2b6550ee849a1110dead29db6083330853f4552fe4768f13a02b1f16ea63850"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.044732 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" event={"ID":"fe571182-64c8-4e51-9d95-5777eafe1746","Type":"ContainerStarted","Data":"525c018cacb76e8a86f878ef7da8719b791e76a16bd73f73c6f9d6869a8403ee"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.045711 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" event={"ID":"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f","Type":"ContainerStarted","Data":"f9bbb83789e26a04f807581a3074c98da81d7aec52c89a12fc99e2a2d3967960"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.046727 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" event={"ID":"c46cf580-9081-4eac-aee1-1dcd5d7df322","Type":"ContainerStarted","Data":"9c2b5b36c0d3edbce4a549d9263fd63fe6c1ac262987cba37a007628205cf471"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.047796 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" event={"ID":"9f20a9b3-632d-44ab-8721-6c512ea15262","Type":"ContainerStarted","Data":"2509dc085a40006a91d302c493d9e5096271d0de6e4b595f5ae3ac5839dfd242"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.049439 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" event={"ID":"fb9642d5-438c-4cdb-ab4a-75a72e236fee","Type":"ContainerStarted","Data":"9c4ad5cd91b86560aa99d097bfc47a81edc03b62035c7e97db8df8b4eb07249c"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.050426 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" event={"ID":"5578ddc6-8840-4d84-abce-93bc621d7aac","Type":"ContainerStarted","Data":"b3ac675ae91b98e54e9b5018f9be0fb26f6ef777f2fcadefe4ecb8cf41483ef4"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.051540 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" event={"ID":"067b1191-de46-48dc-9922-80c85738d142","Type":"ContainerStarted","Data":"cb6a1772d5f2bfa6c2722865f755c3ba1bd88c67df22a76fef72cad599c323d8"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.052987 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" event={"ID":"be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb","Type":"ContainerStarted","Data":"0e402d3b639ba9e8965aa94f0ff0ac8bff8492d0b7be49a0631519cc1b3b405d"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.054323 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-62sbs" event={"ID":"611ac9b7-f05d-4755-bfba-3f54b1cbb7af","Type":"ContainerStarted","Data":"8f0ca3feacb43acd8097a6b1d6680c78f46c3c605e9ef0356a3bf3dd42684054"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.056237 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" event={"ID":"95f7bc1b-b7d2-4096-aa07-fb1ba86b1ada","Type":"ContainerStarted","Data":"518bb240243ff1ffaf3068c9cb2fb88375b75d3889090664e0b6eef3b9fb45e6"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.057086 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" event={"ID":"1855228c-8af6-4c85-afc8-513b36262cf6","Type":"ContainerStarted","Data":"eb3b184c4e898ea73e9a8fb37a15161524b16e215bf4d3a5e86664909ddd707f"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.057849 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" event={"ID":"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44","Type":"ContainerStarted","Data":"c4139cb35e5f0dc34ae762ee8fe0324fbe30c5440ddfd55060d422d1c7e70471"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.058773 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" event={"ID":"e125c503-0c52-41c1-be81-e423204e8348","Type":"ContainerStarted","Data":"f97c7a290cbd775375e7ca11fb1d5e5ba718e713afdf34b8524038d4da588bf1"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.059743 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" event={"ID":"73f7620c-2bcd-4694-abf5-f2b84cefb86b","Type":"ContainerStarted","Data":"84b42c940d9cb18f5dd5ade46bb6215385252b940b9c6cbd7a957b93c769ba8b"}
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.114501 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32870: no serving certificate available for the kubelet"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.120339 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.120692 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.620675384 +0000 UTC m=+124.336468490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.223319 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.224043 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.724011122 +0000 UTC m=+124.439804238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.351702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.352227 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.852204121 +0000 UTC m=+124.567997227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.375017 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32880: no serving certificate available for the kubelet"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.382453 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.382628 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.384862 5113 patch_prober.go:28] interesting pod/apiserver-8596bd845d-n767g container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.384952 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g" podUID="9a9cf80c-c14e-4d96-9887-55bdabc78cec" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.453913 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.454499 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:38.954475423 +0000 UTC m=+124.670268539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.556017 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.556218 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.056182241 +0000 UTC m=+124.771975357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.556489 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.556906 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.056892049 +0000 UTC m=+124.772685165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.658229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.658880 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.158843813 +0000 UTC m=+124.874636929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.716169 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32884: no serving certificate available for the kubelet"
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.759733 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.760238 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.260219093 +0000 UTC m=+124.976012209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.861654 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.862165 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.362132996 +0000 UTC m=+125.077926112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.862454 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.863149 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.363117311 +0000 UTC m=+125.078910597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.963780 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.964028 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.463998878 +0000 UTC m=+125.179792014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:38 crc kubenswrapper[5113]: I1208 17:42:38.964547 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:38 crc kubenswrapper[5113]: E1208 17:42:38.964921 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.464910201 +0000 UTC m=+125.180703327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.027347 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.027431 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.065200 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.065385 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.565356587 +0000 UTC m=+125.281149703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.066122 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.066506 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.566492786 +0000 UTC m=+125.282285952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.168167 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.168589 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.668560133 +0000 UTC m=+125.384353249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.170596 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-n9p2l"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.170711 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-n9p2l"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.173412 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-n9p2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.173509 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-n9p2l" podUID="8cf4b24b-8b34-4e71-b8e8-31fb36974b9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.273225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.273734 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.773711797 +0000 UTC m=+125.489504913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.375579 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.375859 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.875813245 +0000 UTC m=+125.591606361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.377188 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.378139 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.878107172 +0000 UTC m=+125.593900488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.429968 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32886: no serving certificate available for the kubelet"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.478782 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.479086 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.978994769 +0000 UTC m=+125.694787885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.479279 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.479894 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:39.979867161 +0000 UTC m=+125.695660277 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.512069 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.512689 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.512780 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.581013 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.581223 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.081186729 +0000 UTC m=+125.796979845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.581515 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.581877 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.081863966 +0000 UTC m=+125.797657082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.683103 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.683640 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.183618426 +0000 UTC m=+125.899411552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.785173 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.785806 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.285781405 +0000 UTC m=+126.001574541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.887068 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.887691 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.387663157 +0000 UTC m=+126.103456283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:39 crc kubenswrapper[5113]: I1208 17:42:39.988934 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:39 crc kubenswrapper[5113]: E1208 17:42:39.989398 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.489382636 +0000 UTC m=+126.205175752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.090957 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.091414 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.591390781 +0000 UTC m=+126.307183897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.192765 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.193200 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.693182991 +0000 UTC m=+126.408976097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.294308 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.295006 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.794975712 +0000 UTC m=+126.510768818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.317843 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f"
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.405109 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.406075 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:40.906056693 +0000 UTC m=+126.621849809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.416522 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podStartSLOduration=103.416499044 podStartE2EDuration="1m43.416499044s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:39.322688149 +0000 UTC m=+125.038481255" watchObservedRunningTime="2025-12-08 17:42:40.416499044 +0000 UTC m=+126.132292160"
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.418289 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-62sbs" podStartSLOduration=13.418282088 podStartE2EDuration="13.418282088s" podCreationTimestamp="2025-12-08 17:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:40.418102054 +0000 UTC m=+126.133895170" watchObservedRunningTime="2025-12-08 17:42:40.418282088 +0000 UTC m=+126.134075204"
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.506676 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.506964 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.006926411 +0000 UTC m=+126.722719527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.507304 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.508020 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.007962416 +0000 UTC m=+126.723755532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.522608 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:40 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:40 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:40 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.523173 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.609361 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.609625 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.109578052 +0000 UTC m=+126.825371168 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.610083 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.610538 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.110520536 +0000 UTC m=+126.826313642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.714085 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.714239 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.214205053 +0000 UTC m=+126.929998169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.714769 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.715272 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.215253819 +0000 UTC m=+126.931046935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.788023 5113 ???:1] "http: TLS handshake error from 192.168.126.11:32900: no serving certificate available for the kubelet" Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.885779 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.886329 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.386283947 +0000 UTC m=+127.102077063 (durationBeforeRetry 500ms). 
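
Note: every MountDevice/TearDown failure above reduces to the same condition: the kubelet has no CSI driver named kubevirt.io.hostpath-provisioner in its plugin registry, so both the mount path (for the new image-registry pod) and the teardown path (for the old pod UID 9e9b5059-1b3e-4067-a63d-2952cbe863af) retry on a 500ms backoff until the hostpath-provisioner node plugin registers. The node's CSINode object mirrors that registry, so one way to watch for the driver to appear is to poll it. A minimal client-go sketch, assuming a kubeconfig at the default path and the node name crc (both assumptions, not taken from this log):

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Assumption: kubeconfig in the default location; adjust as needed.
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // The CSINode object lists the CSI drivers that have completed node
    // registration with this kubelet; a driver missing here is exactly
    // what "not found in the list of registered CSI drivers" reports.
    node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, d := range node.Spec.Drivers {
        fmt.Println("registered CSI driver:", d.Name)
    }
}

Once kubevirt.io.hostpath-provisioner shows up in that list, the retry loop above should clear on its next 500ms tick.
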
Dec 08 17:42:40 crc kubenswrapper[5113]: I1208 17:42:40.990674 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:40 crc kubenswrapper[5113]: E1208 17:42:40.994753 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.494728142 +0000 UTC m=+127.210521258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.092121 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.092468 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.592444761 +0000 UTC m=+127.308237877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.104906 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-g2cx2" event={"ID":"db5f5b00-b0bf-4fd2-9078-80554270a1b3","Type":"ContainerStarted","Data":"55761b8ee7b0d403a8862d973f79d29076d49df82a25187a9b2ff54cba19bdf8"}
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.107604 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" event={"ID":"7bee79ad-69c2-45b0-bc04-e92af1900a27","Type":"ContainerStarted","Data":"09aef961a7278429b38e5d07e1f4d704fc83dd8c02e4e9d3956e25d662d603ff"}
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.114094 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt" event={"ID":"6968f785-35be-457b-b97d-99098172ebdd","Type":"ContainerStarted","Data":"e3d0bf189c8c327ed4314181af5823f201472a02229423a90802a7d91ca3b2c7"}
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.117242 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" event={"ID":"1b027df0-f583-455e-a52b-68b4431d5394","Type":"ContainerStarted","Data":"94626a56f1e3d075687f92858aa754e67f23eedfdffe15e0e128f412d7b96b9b"}
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.142366 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.181484 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.183971 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-jw54r" podStartSLOduration=104.183949704 podStartE2EDuration="1m44.183949704s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.182282803 +0000 UTC m=+126.898075919" watchObservedRunningTime="2025-12-08 17:42:41.183949704 +0000 UTC m=+126.899742820"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.184166 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-g2cx2" podStartSLOduration=104.184132409 podStartE2EDuration="1m44.184132409s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.160618092 +0000 UTC m=+126.876411208" watchObservedRunningTime="2025-12-08 17:42:41.184132409 +0000 UTC m=+126.899925525"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.193877 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.199015 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.69898649 +0000 UTC m=+127.414779806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.212332 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5" podStartSLOduration=104.212298772 podStartE2EDuration="1m44.212298772s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.209931503 +0000 UTC m=+126.925724619" watchObservedRunningTime="2025-12-08 17:42:41.212298772 +0000 UTC m=+126.928091888"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.237971 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-rmn54" podStartSLOduration=104.237946452 podStartE2EDuration="1m44.237946452s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.236478515 +0000 UTC m=+126.952271621" watchObservedRunningTime="2025-12-08 17:42:41.237946452 +0000 UTC m=+126.953739578"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.247536 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.273605 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podStartSLOduration=13.273577711 podStartE2EDuration="13.273577711s" podCreationTimestamp="2025-12-08 17:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:41.257312685 +0000 UTC m=+126.973105801" watchObservedRunningTime="2025-12-08 17:42:41.273577711 +0000 UTC m=+126.989370837"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.298013 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.298448 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.798430091 +0000 UTC m=+127.514223207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.400530 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.401083 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:41.901068772 +0000 UTC m=+127.616861888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.521918 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.522638 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.022610573 +0000 UTC m=+127.738403689 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.540968 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-kjgph"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.624511 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.624975 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.124955509 +0000 UTC m=+127.840748625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.687747 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:41 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:41 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:41 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.687848 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.715675 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-jmmc5"
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.727004 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.727607 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.22757258 +0000 UTC m=+127.943365696 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.832072 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.832484 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.332468098 +0000 UTC m=+128.048261214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.833959 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5hq9p"]
Dec 08 17:42:41 crc kubenswrapper[5113]: I1208 17:42:41.933681 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:41 crc kubenswrapper[5113]: E1208 17:42:41.934718 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.434691679 +0000 UTC m=+128.150484795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.036838 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.037276 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.537258888 +0000 UTC m=+128.253051994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.187428 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.188181 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.688148804 +0000 UTC m=+128.403941920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.308379 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.308911 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.808884116 +0000 UTC m=+128.524677232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.410062 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.410914 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:42.910872111 +0000 UTC m=+128.626665227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.511812 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.513791 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.013773889 +0000 UTC m=+128.729567005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.533768 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" event={"ID":"06330ab4-fda1-473e-a461-4091dd3b78e8","Type":"ContainerStarted","Data":"783ac3c3b6f4afd02c784cead53b68f047c9e1b0e804fab707a93dcb3ab34451"}
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.535156 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.536273 5113 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-n5msr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.536316 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" podUID="06330ab4-fda1-473e-a461-4091dd3b78e8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.617505 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.617881 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.117864447 +0000 UTC m=+128.833657553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.618098 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" event={"ID":"73f7620c-2bcd-4694-abf5-f2b84cefb86b","Type":"ContainerStarted","Data":"28bc8e16c0bdee5c875eecd2dcfcf6ce1b960f7ebff7ea4f735e7b34388abffe"}
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.620004 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.620109 5113 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xbdkt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.620145 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" podUID="73f7620c-2bcd-4694-abf5-f2b84cefb86b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.632644 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" event={"ID":"b1d1632d-d9ab-4079-b57e-91366b0c2fde","Type":"ContainerStarted","Data":"bbb3791bfbd1427be08f953a7291791205cf1775c3679568a383bf4763ea2dbd"}
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.772721 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67" podStartSLOduration=105.77268732 podStartE2EDuration="1m45.77268732s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.768585058 +0000 UTC m=+128.484378204" watchObservedRunningTime="2025-12-08 17:42:42.77268732 +0000 UTC m=+128.488480446"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.774672 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.775059 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.275023799 +0000 UTC m=+128.990816915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.778907 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:42 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:42 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:42 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.779086 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.798590 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" event={"ID":"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58","Type":"ContainerStarted","Data":"f96f50b2eabb96fffb4b27bbef5878df898b6183b92a204dee94ca3763e911ca"}
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.812529 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" event={"ID":"fe571182-64c8-4e51-9d95-5777eafe1746","Type":"ContainerStarted","Data":"bc9bb9385ad5205041e01e47cfb5d328b88cd495cb833ecbeb6cc8b1752c1581"}
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.812634 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.823477 5113 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-vmlfn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.823580 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" podUID="fe571182-64c8-4e51-9d95-5777eafe1746" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
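
Note: the connection-refused readiness failures in this stretch (olm-operator, catalog-operator, packageserver, each right after its ContainerStarted event) are the ordinary race between a container starting and its endpoint beginning to listen, unrelated to the CSI retries. A kubelet HTTPS probe behaves roughly like a GET with a short timeout and certificate verification disabled; a sketch of the equivalent manual check (the URL is taken from the olm-operator log line above, everything else is illustrative):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    // Roughly what a kubelet HTTPS probe does: short timeout, no cert checks.
    client := &http.Client{
        Timeout: 1 * time.Second,
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }
    resp, err := client.Get("https://10.217.0.40:8443/healthz")
    if err != nil {
        // "connect: connection refused", as in the log, means the container
        // process is up but nothing is listening on the port yet.
        fmt.Println("probe failed:", err)
        return
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("status %d: %s\n", resp.StatusCode, body)
}
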
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.825575 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" podStartSLOduration=105.825535719 podStartE2EDuration="1m45.825535719s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.81715045 +0000 UTC m=+128.532943566" watchObservedRunningTime="2025-12-08 17:42:42.825535719 +0000 UTC m=+128.541328825"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.871355 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" podStartSLOduration=105.871317891 podStartE2EDuration="1m45.871317891s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:42.868523542 +0000 UTC m=+128.584316658" watchObservedRunningTime="2025-12-08 17:42:42.871317891 +0000 UTC m=+128.587111007"
Dec 08 17:42:42 crc kubenswrapper[5113]: I1208 17:42:42.884143 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:42 crc kubenswrapper[5113]: E1208 17:42:42.884924 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.384906531 +0000 UTC m=+129.100699637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.017006 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.017484 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.517467458 +0000 UTC m=+129.233260574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.118663 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.118882 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.618839708 +0000 UTC m=+129.334632824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.135115 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" podStartSLOduration=106.135082423 podStartE2EDuration="1m46.135082423s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.015454428 +0000 UTC m=+128.731247544" watchObservedRunningTime="2025-12-08 17:42:43.135082423 +0000 UTC m=+128.850875539"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.292666 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.294309 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.794286426 +0000 UTC m=+129.510079542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.394393 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.394816 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.395440 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:43.89541304 +0000 UTC m=+129.611206166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.395743 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55906: no serving certificate available for the kubelet"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.422342 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6mtlt" podStartSLOduration=106.422307471 podStartE2EDuration="1m46.422307471s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:43.137542815 +0000 UTC m=+128.853335931" watchObservedRunningTime="2025-12-08 17:42:43.422307471 +0000 UTC m=+129.138100587"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.639496 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.639933 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.139909481 +0000 UTC m=+129.855702587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
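
Note: the two "no serving certificate available for the kubelet" handshake errors (17:42:40.788023 and 17:42:43.395743) indicate the kubelet is answering connections before its serving certificate has been issued. On OpenShift that certificate arrives through a CertificateSigningRequest handled by the kubernetes.io/kubelet-serving signer, so a stuck-pending CSR would keep these errors recurring. A sketch that lists those CSRs and their state, under the same kubeconfig assumption as the earlier sketch:

package main

import (
    "context"
    "fmt"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, csr := range csrs.Items {
        // Kubelet serving certificates are issued through this signer.
        if csr.Spec.SignerName != "kubernetes.io/kubelet-serving" {
            continue
        }
        state := "Pending" // no conditions yet: the CSR is waiting for approval
        if n := len(csr.Status.Conditions); n > 0 {
            state = string(csr.Status.Conditions[n-1].Type)
        }
        fmt.Println(csr.Name, state)
    }
}
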
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.641065 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:43 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:43 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:43 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.641150 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.741196 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.741451 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.241421204 +0000 UTC m=+129.957214320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.741621 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.742238 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.242229124 +0000 UTC m=+129.958022240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.871625 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.872091 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.372067054 +0000 UTC m=+130.087860170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.926981 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" event={"ID":"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152","Type":"ContainerStarted","Data":"a8f6c80b6b23ddd72cc334e1d441af302dbdc73e697ac7b536dd4db76020b550"}
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.930231 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" event={"ID":"c46cf580-9081-4eac-aee1-1dcd5d7df322","Type":"ContainerStarted","Data":"67718f4ac95e1f2c5e512760e576b68511d5bb35af99bde73af38bda7fafb824"}
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.948511 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.962271 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"
Dec 08 17:42:43 crc kubenswrapper[5113]: I1208 17:42:43.974085 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:43 crc kubenswrapper[5113]: E1208 17:42:43.974676 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.474651334 +0000 UTC m=+130.190444450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.057322 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.062445 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bhw9j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.062532 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused"
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.077055 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.078763 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.578742762 +0000 UTC m=+130.294535878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.093420 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-qrdwk"
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.098095 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-n767g"
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.108374 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-d2l67" event={"ID":"f7d67d90-c6bc-475e-891e-a90471f44e71","Type":"ContainerStarted","Data":"b6ee5a5cdaf1f3586a4be2eedc4eb1bab67bc8630eaeda0669716d482be01ec1"}
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.133890 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" event={"ID":"067b1191-de46-48dc-9922-80c85738d142","Type":"ContainerStarted","Data":"39023b8c0cb953712b7adda9686843a90bdb871c22db92e8562480917f729ca1"}
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.134107 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" gracePeriod=30
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.135195 5113 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-vmlfn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.135255 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn" podUID="fe571182-64c8-4e51-9d95-5777eafe1746" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.136160 5113 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-n5msr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.136198 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" podUID="06330ab4-fda1-473e-a461-4091dd3b78e8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.136254 5113 patch_prober.go:28] interesting
pod/packageserver-7d4fc7d867-xbdkt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.136273 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" podUID="73f7620c-2bcd-4694-abf5-f2b84cefb86b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.206414 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.208008 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.707993067 +0000 UTC m=+130.423786183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.377026 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.379964 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.879936068 +0000 UTC m=+130.595729194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.480441 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.480813 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:44.980800895 +0000 UTC m=+130.696594001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.524401 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" podStartSLOduration=107.524378021 podStartE2EDuration="1m47.524378021s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:44.435285449 +0000 UTC m=+130.151078575" watchObservedRunningTime="2025-12-08 17:42:44.524378021 +0000 UTC m=+130.240171157" Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.525285 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" podStartSLOduration=107.525273564 podStartE2EDuration="1m47.525273564s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:44.523724735 +0000 UTC m=+130.239517871" watchObservedRunningTime="2025-12-08 17:42:44.525273564 +0000 UTC m=+130.241066680" Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.535700 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:44 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:44 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:44 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.535782 5113 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.583884 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.585053 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.085013064 +0000 UTC m=+130.800806180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.585310 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.586210 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.086199824 +0000 UTC m=+130.801992940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.976341 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:44 crc kubenswrapper[5113]: E1208 17:42:44.976753 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.47673501 +0000 UTC m=+131.192528126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.993653 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:42:44 crc kubenswrapper[5113]: I1208 17:42:44.993726 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.094763 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.095152 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.595134624 +0000 UTC m=+131.310927740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.197025 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.198648 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.698596116 +0000 UTC m=+131.414389232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.217148 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" event={"ID":"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44","Type":"ContainerStarted","Data":"fb2424bd26972aeca9260f4bc83310b7a9dc629168b7088deff24e6c681da332"} Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.223578 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-gwjhb" event={"ID":"58ec59b3-b3df-4362-8a55-195c1ac13192","Type":"ContainerStarted","Data":"7844ad4dd5b825ca62c0fed5b5efef18a3d0531165fb407a36aa008ddfad2645"} Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.258763 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2vwv" event={"ID":"7a5121ce-5d23-4bc7-925b-645160d834f3","Type":"ContainerStarted","Data":"302de614f53cf9d63200a09dff3fb998cf9d84445e92443de44f9c7823aacb84"} Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.261396 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" event={"ID":"9f20a9b3-632d-44ab-8721-6c512ea15262","Type":"ContainerStarted","Data":"a91399d7733d3a191e0517884ec357929241d6390170e3baf650ca634a3cbcab"} Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.263543 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" event={"ID":"5578ddc6-8840-4d84-abce-93bc621d7aac","Type":"ContainerStarted","Data":"baf6462a83503d6232dd31846c827f2c7cb81cf14b1d3eee6a94b661292513b7"} Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.296760 5113 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xbdkt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.296806 5113 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-n5msr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.296857 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" podUID="73f7620c-2bcd-4694-abf5-f2b84cefb86b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.296906 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr" podUID="06330ab4-fda1-473e-a461-4091dd3b78e8" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.296953 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bhw9j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.296971 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.300214 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.303355 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.80334153 +0000 UTC m=+131.519134646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.433724 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.433926 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.933896308 +0000 UTC m=+131.649689424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.434527 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.434938 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:45.934928383 +0000 UTC m=+131.650721499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.523408 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:45 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:45 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:45 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.523501 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.535997 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.536717 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.036692113 +0000 UTC m=+131.752485229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.680772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.681402 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.181379363 +0000 UTC m=+131.897172479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.784355 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.784873 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.284836255 +0000 UTC m=+132.000629371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.890554 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.891775 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.391753343 +0000 UTC m=+132.107546459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.984619 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:42:45 crc kubenswrapper[5113]: I1208 17:42:45.993742 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:45 crc kubenswrapper[5113]: E1208 17:42:45.994468 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.494436665 +0000 UTC m=+132.210229781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.096961 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.098266 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.598234976 +0000 UTC m=+132.314028252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.198341 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.198573 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.698508868 +0000 UTC m=+132.414302124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.199257 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.199728 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.699703918 +0000 UTC m=+132.415497034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.300172 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.300367 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.800322788 +0000 UTC m=+132.516115924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.301115 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.301603 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.80158728 +0000 UTC m=+132.517380406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.311022 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" event={"ID":"9d8220da-8458-40d0-b093-c1a70b200985","Type":"ContainerStarted","Data":"63134795e48535618150b9a66dd302254cabc8de2a6ef31aa1de2b7db2ca382b"} Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.312372 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bhw9j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.312430 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.402773 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.402970 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:46.902934729 +0000 UTC m=+132.618727965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.403607 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.406139 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:46.906126539 +0000 UTC m=+132.621919655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.468712 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-gwjhb" podStartSLOduration=18.46869128 podStartE2EDuration="18.46869128s" podCreationTimestamp="2025-12-08 17:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:46.444005224 +0000 UTC m=+132.159798340" watchObservedRunningTime="2025-12-08 17:42:46.46869128 +0000 UTC m=+132.184484416" Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.470848 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-5l9d7" podStartSLOduration=109.470834243 podStartE2EDuration="1m49.470834243s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:46.468012883 +0000 UTC m=+132.183806019" watchObservedRunningTime="2025-12-08 17:42:46.470834243 +0000 UTC m=+132.186627359" Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.505509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.505911 5113 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.005876158 +0000 UTC m=+132.721669274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.520661 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:46 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:46 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:46 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.520764 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.608107 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.608714 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.108686323 +0000 UTC m=+132.824479439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.713340 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.713999 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 17:42:47.213975411 +0000 UTC m=+132.929768527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.817265 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.818050 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.318012797 +0000 UTC m=+133.033805913 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:46 crc kubenswrapper[5113]: I1208 17:42:46.928300 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:46 crc kubenswrapper[5113]: E1208 17:42:46.928792 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.428774901 +0000 UTC m=+133.144568017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.030868 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.031310 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.531297369 +0000 UTC m=+133.247090485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.132174 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.133451 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.633418968 +0000 UTC m=+133.349212094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.247584 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.248204 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.748177771 +0000 UTC m=+133.463970907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.375525 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.375927 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:47.875906249 +0000 UTC m=+133.591699365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.435000 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" event={"ID":"7bee79ad-69c2-45b0-bc04-e92af1900a27","Type":"ContainerStarted","Data":"a9a7c69c6794852e8ea1a86014d89286f8b15469f5f20306d18143840dbb235e"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.453337 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" event={"ID":"b1d1632d-d9ab-4079-b57e-91366b0c2fde","Type":"ContainerStarted","Data":"efda317ec08068742962c5a9385315ed83df2d389e2bd6072571928bb1a42618"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.592545 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.593079 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.093065798 +0000 UTC m=+133.808858914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.601651 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.607006 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.607324 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:47 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:47 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:47 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.607715 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.613588 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" event={"ID":"da8d1cb5-ad1f-48b7-8208-6b840f893cd5","Type":"ContainerStarted","Data":"49fc0c784c45a1d429ee323cbeba0d25260211f3096c7b440663863f4d3efa98"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.617955 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2vwv" event={"ID":"7a5121ce-5d23-4bc7-925b-645160d834f3","Type":"ContainerStarted","Data":"bd73b38cc542b00338a8db13d823f58226690829489fe05b38ac91c5d14ca71a"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.636484 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" event={"ID":"eadd88fe-cbfc-41e6-afd3-c7bd4f9eec58","Type":"ContainerStarted","Data":"5f75eb25140ed4b601033e776703f075e21de982e0a87679f0f1dc43928fca75"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.647082 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" event={"ID":"f56553d9-63c5-47e4-baf9-9b3cfdf8c75f","Type":"ContainerStarted","Data":"04fec34e4cd7c81f2498d9ac2c889d5c9a957fb531bceffcc5f626840f8dbaa1"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.665759 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" event={"ID":"5578ddc6-8840-4d84-abce-93bc621d7aac","Type":"ContainerStarted","Data":"10ba5c8d73901c5ddb87d765922ff2fb114b830026075a8cd9a44a395ad6d231"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.672332 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" event={"ID":"067b1191-de46-48dc-9922-80c85738d142","Type":"ContainerStarted","Data":"2acf3cd77b91fc75502339eb97a6b950ec1f7d666286f67af1400d83a2e6d097"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.674182 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" event={"ID":"1855228c-8af6-4c85-afc8-513b36262cf6","Type":"ContainerStarted","Data":"81db9977f89382a2e71b846ef219d77e8ac7d066781d324a752994c58df30e02"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.679529 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" event={"ID":"e125c503-0c52-41c1-be81-e423204e8348","Type":"ContainerStarted","Data":"440275e893d63f1da014b8dec3c077406db9b973818cfb739fd74c0eedcf5694"} Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.693691 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.694083 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.194066688 +0000 UTC m=+133.909859804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.873414 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.873929 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.373908936 +0000 UTC m=+134.089702052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.979156 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.979399 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.479374428 +0000 UTC m=+134.195167544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:47 crc kubenswrapper[5113]: I1208 17:42:47.979736 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:47 crc kubenswrapper[5113]: E1208 17:42:47.980281 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.4802655 +0000 UTC m=+134.196058616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.122739 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.122890 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.622865988 +0000 UTC m=+134.338659104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.124255 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.125355 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.625340869 +0000 UTC m=+134.341133985 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.241995 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.242497 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.742468072 +0000 UTC m=+134.458261178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.268735 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-bs7z2" podStartSLOduration=111.268714487 podStartE2EDuration="1m51.268714487s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:48.150076627 +0000 UTC m=+133.865869743" watchObservedRunningTime="2025-12-08 17:42:48.268714487 +0000 UTC m=+133.984507603" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.381829 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.382276 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:48.8822569 +0000 UTC m=+134.598050016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.594869 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.595162 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.095144763 +0000 UTC m=+134.810937869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.631407 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:48 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:48 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:48 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.631509 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.698483 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.698965 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.198945703 +0000 UTC m=+134.914738819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.701547 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.704352 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.704980 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.719506 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-dd4zh" podStartSLOduration=111.719476285 podStartE2EDuration="1m51.719476285s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:48.270680996 +0000 UTC m=+133.986474112" watchObservedRunningTime="2025-12-08 17:42:48.719476285 +0000 UTC m=+134.435269411" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.720843 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-9rcgg" podStartSLOduration=111.720833349 podStartE2EDuration="1m51.720833349s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:48.71684949 +0000 UTC m=+134.432642626" watchObservedRunningTime="2025-12-08 17:42:48.720833349 +0000 UTC m=+134.436626465" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.781645 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55912: no serving certificate available for the kubelet" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.799824 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.803533 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.303476692 +0000 UTC m=+135.019269808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.847813 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-6f6vr" podStartSLOduration=111.847779577 podStartE2EDuration="1m51.847779577s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:48.846839064 +0000 UTC m=+134.562632200" watchObservedRunningTime="2025-12-08 17:42:48.847779577 +0000 UTC m=+134.563572693" Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.903244 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:48 crc kubenswrapper[5113]: E1208 17:42:48.903641 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.403625091 +0000 UTC m=+135.119418207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:48 crc kubenswrapper[5113]: I1208 17:42:48.906866 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x2vwv" podStartSLOduration=21.906827031 podStartE2EDuration="21.906827031s" podCreationTimestamp="2025-12-08 17:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:48.903881837 +0000 UTC m=+134.619674953" watchObservedRunningTime="2025-12-08 17:42:48.906827031 +0000 UTC m=+134.622620147" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.023657 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.023928 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.523893261 +0000 UTC m=+135.239686377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.024187 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.024655 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.52463497 +0000 UTC m=+135.240428096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.027896 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.028020 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.125427 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.126297 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.626264842 +0000 UTC m=+135.342057958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.212161 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-5w66m" podStartSLOduration=112.21211961 podStartE2EDuration="1m52.21211961s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:49.011103153 +0000 UTC m=+134.726896289" watchObservedRunningTime="2025-12-08 17:42:49.21211961 +0000 UTC m=+134.927912716" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.214129 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=56.214121721 podStartE2EDuration="56.214121721s" podCreationTimestamp="2025-12-08 17:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:49.211472223 +0000 UTC m=+134.927265339" watchObservedRunningTime="2025-12-08 17:42:49.214121721 +0000 UTC m=+134.929914837" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.229199 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.229908 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.729887685 +0000 UTC m=+135.445680801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.363403 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.363945 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.863917157 +0000 UTC m=+135.579710273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.370660 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-n9p2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.370754 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-n9p2l" podUID="8cf4b24b-8b34-4e71-b8e8-31fb36974b9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.465203 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-8dmhx" podStartSLOduration=112.46518286 podStartE2EDuration="1m52.46518286s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:49.343590907 +0000 UTC m=+135.059384023" watchObservedRunningTime="2025-12-08 17:42:49.46518286 +0000 UTC m=+135.180975976" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.475849 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.476335 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:49.976321465 +0000 UTC m=+135.692114581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.577029 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.577921 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.077902336 +0000 UTC m=+135.793695452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.591881 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:49 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:49 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:49 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.591997 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.635481 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" podStartSLOduration=112.63546283 podStartE2EDuration="1m52.63546283s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:49.46984978 +0000 UTC m=+135.185642906" watchObservedRunningTime="2025-12-08 17:42:49.63546283 +0000 UTC m=+135.351255936" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.679990 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.680680 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.180657478 +0000 UTC m=+135.896450594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.741455 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" event={"ID":"9d8220da-8458-40d0-b093-c1a70b200985","Type":"ContainerStarted","Data":"aee9a93eebd46a60f331d45f97b077720213aa83f020fd63cd1d68686732c5cb"} Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.744218 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" event={"ID":"b5bc0b8b-b537-4cae-8cc9-970eba4e8b44","Type":"ContainerStarted","Data":"d3945f88f57d4f0fd7cc0137f38d9a679c93378c70afd3cc7a74e39cda7f36c0"} Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.785493 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.785913 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.285890792 +0000 UTC m=+136.001683908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.850576 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-7g2c4" podStartSLOduration=112.850552248 podStartE2EDuration="1m52.850552248s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:49.636441785 +0000 UTC m=+135.352234901" watchObservedRunningTime="2025-12-08 17:42:49.850552248 +0000 UTC m=+135.566345374" Dec 08 17:42:49 crc kubenswrapper[5113]: I1208 17:42:49.887992 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:49 crc kubenswrapper[5113]: E1208 17:42:49.892002 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.391986979 +0000 UTC m=+136.107780095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.066396 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.066603 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.566569689 +0000 UTC m=+136.282362805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.066916 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.067405 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.56738182 +0000 UTC m=+136.283174936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.167905 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.168175 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.66813354 +0000 UTC m=+136.383926766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.168394 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.169227 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.669216258 +0000 UTC m=+136.385009374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.270095 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.270626 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.770601744 +0000 UTC m=+136.486394860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.273143 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" podStartSLOduration=113.273124818 podStartE2EDuration="1m53.273124818s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:50.143731175 +0000 UTC m=+135.859524291" watchObservedRunningTime="2025-12-08 17:42:50.273124818 +0000 UTC m=+135.988917944" Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.389457 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.390129 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.890100054 +0000 UTC m=+136.605893170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.404028 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-48lzh" podStartSLOduration=113.403998659 podStartE2EDuration="1m53.403998659s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:50.401766572 +0000 UTC m=+136.117559708" watchObservedRunningTime="2025-12-08 17:42:50.403998659 +0000 UTC m=+136.119791775" Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.404558 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-749v6" podStartSLOduration=113.404550994 podStartE2EDuration="1m53.404550994s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:42:50.279301826 +0000 UTC m=+135.995094952" watchObservedRunningTime="2025-12-08 17:42:50.404550994 +0000 UTC m=+136.120344130" Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.451728 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.491543 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.491777 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.991729426 +0000 UTC m=+136.707522542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.492413 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.493732 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:50.993713067 +0000 UTC m=+136.709506183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.543491 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:50 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:50 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:50 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.543627 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.644648 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.644848 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.144813556 +0000 UTC m=+136.860606672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.645293 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.645760 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.145738329 +0000 UTC m=+136.861531455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.746271 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.746546 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.246500189 +0000 UTC m=+136.962293305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.746886 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.747531 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.247500135 +0000 UTC m=+136.963293251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.848010 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.848137 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.348117601 +0000 UTC m=+137.063910717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.848387 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.848696 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.348688406 +0000 UTC m=+137.064481522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:50 crc kubenswrapper[5113]: I1208 17:42:50.963577 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:50 crc kubenswrapper[5113]: E1208 17:42:50.963996 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.463966698 +0000 UTC m=+137.179759814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.091346 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.091687 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.591674848 +0000 UTC m=+137.307467964 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.196959 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.197384 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.697334734 +0000 UTC m=+137.413127850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.200851 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.234464 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.239059 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.239172 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.298413 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.298877 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.798836803 +0000 UTC m=+137.514629919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.400122 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.400697 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:51.90067327 +0000 UTC m=+137.616466386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.504297 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.504954 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.00493391 +0000 UTC m=+137.720727026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.521353 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:51 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.521468 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.605894 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.606177 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.106133201 +0000 UTC m=+137.821926317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.607258 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.607666 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.10765335 +0000 UTC m=+137.823446466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.623502 5113 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-xbdkt container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 17:42:51 crc kubenswrapper[5113]: [+]log ok Dec 08 17:42:51 crc kubenswrapper[5113]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5113]: [-]poststarthook/max-in-flight-filter failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5113]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Dec 08 17:42:51 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.623593 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt" podUID="73f7620c-2bcd-4694-abf5-f2b84cefb86b" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.718235 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.719189 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.219168295 +0000 UTC m=+137.934961411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.725777 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-bhw9j container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.725840 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.821555 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.822175 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.322154522 +0000 UTC m=+138.037947638 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.840705 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" event={"ID":"17535922-286a-4eba-a833-f8feeb9af226","Type":"ContainerStarted","Data":"e6dbfaea4ecfd54ed1fc18a8c339f9dec9f2bdfa1a4eb58fdb0714f68a5f9831"} Dec 08 17:42:51 crc kubenswrapper[5113]: I1208 17:42:51.923217 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:51 crc kubenswrapper[5113]: E1208 17:42:51.923633 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.4236085 +0000 UTC m=+138.139401616 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.024669 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.025113 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.525094918 +0000 UTC m=+138.240888034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.127155 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.127436 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.627401978 +0000 UTC m=+138.343195104 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.128445 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.128935 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.628922837 +0000 UTC m=+138.344715953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.135245 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6x2ww"] Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.162548 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.167129 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.193453 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x2ww"] Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.232074 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.232420 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.732399356 +0000 UTC m=+138.448192482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.340800 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-catalog-content\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.340953 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcn2\" (UniqueName: \"kubernetes.io/projected/f838eabb-c868-4308-ab80-860767b7bf4a-kube-api-access-qzcn2\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.341051 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.341230 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-utilities\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.341692 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:52.841673724 +0000 UTC m=+138.557466830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.575794 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.576134 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-catalog-content\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.576194 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzcn2\" (UniqueName: \"kubernetes.io/projected/f838eabb-c868-4308-ab80-860767b7bf4a-kube-api-access-qzcn2\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.576276 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-utilities\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.576372 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.076325333 +0000 UTC m=+138.792118549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.576764 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-utilities\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.577024 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-catalog-content\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.580207 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:52 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:52 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:52 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.580279 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.626517 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5vsp"] Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.649800 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.847060 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.848799 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.348763689 +0000 UTC m=+139.064556815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.854800 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.880426 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzcn2\" (UniqueName: \"kubernetes.io/projected/f838eabb-c868-4308-ab80-860767b7bf4a-kube-api-access-qzcn2\") pod \"certified-operators-6x2ww\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.983515 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.983939 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-utilities\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.983998 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqz88\" (UniqueName: \"kubernetes.io/projected/d6ee077b-7234-40ba-87fc-f305ca2738e3-kube-api-access-rqz88\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:52 crc kubenswrapper[5113]: I1208 17:42:52.984095 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-catalog-content\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:52 crc kubenswrapper[5113]: E1208 17:42:52.984420 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.484357681 +0000 UTC m=+139.200150997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.002393 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5vsp"] Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.002475 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hssz4"] Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.024540 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hssz4"] Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.024600 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wng52"] Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.081831 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.082736 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.130081 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rqz88\" (UniqueName: \"kubernetes.io/projected/d6ee077b-7234-40ba-87fc-f305ca2738e3-kube-api-access-rqz88\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.130194 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-catalog-content\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.130331 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.130417 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-utilities\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.131190 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-utilities\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.131901 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-catalog-content\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.134669 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wng52"] Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.137553 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.637523573 +0000 UTC m=+139.353316689 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.139926 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.198345 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.200380 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.300627 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.314517 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.814471664 +0000 UTC m=+139.530264780 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.314734 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-catalog-content\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.314884 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-catalog-content\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.314972 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.315118 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762kl\" (UniqueName: \"kubernetes.io/projected/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-kube-api-access-762kl\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.315153 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-utilities\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.315210 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfwh\" (UniqueName: \"kubernetes.io/projected/41d40883-8b52-49f7-b408-0d99251bf9f2-kube-api-access-jvfwh\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.315249 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-utilities\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.317890 5113 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.817878501 +0000 UTC m=+139.533671617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.410960 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqz88\" (UniqueName: \"kubernetes.io/projected/d6ee077b-7234-40ba-87fc-f305ca2738e3-kube-api-access-rqz88\") pod \"community-operators-q5vsp\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.422837 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.423166 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-catalog-content\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.423253 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-catalog-content\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.423343 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-762kl\" (UniqueName: \"kubernetes.io/projected/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-kube-api-access-762kl\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.423383 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-utilities\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.423463 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jvfwh\" (UniqueName: \"kubernetes.io/projected/41d40883-8b52-49f7-b408-0d99251bf9f2-kube-api-access-jvfwh\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 
17:42:53.423521 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-utilities\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.424319 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-utilities\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.424450 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:53.924427229 +0000 UTC m=+139.640220345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.424798 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-catalog-content\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.425449 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-catalog-content\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.426857 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-utilities\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.496394 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-762kl\" (UniqueName: \"kubernetes.io/projected/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-kube-api-access-762kl\") pod \"community-operators-hssz4\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.496726 5113 util.go:30] "No sandbox for pod can be found. 
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.496726 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hssz4"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.501020 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvfwh\" (UniqueName: \"kubernetes.io/projected/41d40883-8b52-49f7-b408-0d99251bf9f2-kube-api-access-jvfwh\") pod \"certified-operators-wng52\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " pod="openshift-marketplace/certified-operators-wng52"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.518507 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:53 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:53 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.518611 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.525368 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.526096 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.026074622 +0000 UTC m=+139.741867738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.550650 5113 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-j9s7b container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]log ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]etcd ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/max-in-flight-filter ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Dec 08 17:42:53 crc kubenswrapper[5113]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/project.openshift.io-projectcache ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-startinformers ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-restmapperupdater ok
Dec 08 17:42:53 crc kubenswrapper[5113]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 08 17:42:53 crc kubenswrapper[5113]: livez check failed
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.550763 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" podUID="da8d1cb5-ad1f-48b7-8208-6b840f893cd5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.561511 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
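The two startup-probe blocks above are kubelet-side captures of healthz/livez-style endpoints: each registered sub-check reports `[+]name ok` or `[-]name failed: reason withheld`, and a single failure turns the whole response into the HTTP 500 the prober reports. A small Go sketch of that aggregation shape follows; it is illustrative only, not the router's or apiserver's actual handler, and the check names in main are just examples taken from the log.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// healthCheck is one named sub-check, as seen in the [+]/[-] lines above.
type healthCheck struct {
	name  string
	check func() error
}

// healthzHandler aggregates sub-checks the way the log output reads:
// every check is listed, failures withhold their reason, and any
// failure makes the endpoint answer 500.
func healthzHandler(endpoint string, checks []healthCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var b strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.check(); err != nil {
				failed = true
				fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&b, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			fmt.Fprintf(&b, "%s check failed", endpoint)
			http.Error(w, b.String(), http.StatusInternalServerError) // the 500 the prober sees
			return
		}
		fmt.Fprintf(&b, "%s check passed\n", endpoint)
		fmt.Fprint(w, b.String())
	}
}

func main() {
	checks := []healthCheck{
		{"process-running", func() error { return nil }},
		{"backend-http", func() error { return fmt.Errorf("backend not ready") }},
		{"has-synced", func() error { return fmt.Errorf("not synced") }},
	}
	http.Handle("/healthz", healthzHandler("healthz", checks))
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}
```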
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.589949 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.626747 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.626988 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb1f207-abaa-42e5-bb62-4ee571918568-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.627183 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb1f207-abaa-42e5-bb62-4ee571918568-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.627529 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.127495649 +0000 UTC m=+139.843288765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.657225 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vsp"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.662241 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.662525 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.675758 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.737220 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb1f207-abaa-42e5-bb62-4ee571918568-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.737302 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.737406 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb1f207-abaa-42e5-bb62-4ee571918568-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.737515 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb1f207-abaa-42e5-bb62-4ee571918568-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.738294 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.238274376 +0000 UTC m=+139.954067492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.846743 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wng52"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.848163 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.848719 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.348692933 +0000 UTC m=+140.064486059 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.868009 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb1f207-abaa-42e5-bb62-4ee571918568-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.951643 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:53 crc kubenswrapper[5113]: E1208 17:42:53.952127 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.452110751 +0000 UTC m=+140.167903857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:53 crc kubenswrapper[5113]: I1208 17:42:53.995713 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.059370 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.061156 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.561135863 +0000 UTC m=+140.276928979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.204640 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.205104 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.705082969 +0000 UTC m=+140.420876085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.286704 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-vmlfn"
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.366185 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
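Every MountDevice and TearDown failure in this stretch bottoms out in the same lookup: the kubelet resolves the volume's driver name against the set of CSI plugins that have completed node registration, and kubevirt.io.hostpath-provisioner is not in that set yet (presumably the hostpath-provisioner plugin has not re-registered this early after the restart; once it registers, the gated retries can succeed). A compact Go sketch of that lookup shape follows; the registry type, function names, and socket path are invented for illustration and are not the kubelet's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry stands in for the kubelet's in-memory view of CSI
// plugins that have registered over the plugin-registration socket.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]string // driver name -> unix socket endpoint
}

func newRegistry() *csiDriverRegistry {
	return &csiDriverRegistry{drivers: make(map[string]string)}
}

// register is, in effect, what a node plugin's registration does.
func (r *csiDriverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = endpoint
}

// newCsiDriverClient fails the way the log entries do when the driver
// has not registered yet.
func (r *csiDriverRegistry) newCsiDriverClient(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.drivers[name]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return ep, nil
}

func main() {
	reg := newRegistry()
	// Before the provisioner plugin comes up: the lookup fails and the
	// volume operation goes back under the retry gate.
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice would fail:", err)
	}
	// Once the plugin registers, the same retried operation can proceed.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/example/csi.sock")
	ep, _ := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner")
	fmt.Println("MountDevice can proceed via", ep)
}
```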
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.366671 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.866648376 +0000 UTC m=+140.582441492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.366939 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.371683 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.871661644 +0000 UTC m=+140.587454790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.477881 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.478696 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:54.978671564 +0000 UTC m=+140.694464690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.493121 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m6gs"]
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.525290 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:54 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:54 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:54 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.525401 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.593465 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.593855 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.093841253 +0000 UTC m=+140.809634369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.698202 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.698503 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.198487523 +0000 UTC m=+140.914280629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.799569 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.800007 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.299989222 +0000 UTC m=+141.015782338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.897765 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m6gs"]
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.897833 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5qxws"]
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.898665 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.900430 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.900523 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-catalog-content\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.900551 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpvl\" (UniqueName: \"kubernetes.io/projected/6c70de64-72e0-4f9a-a819-2c1a683e43b7-kube-api-access-ghpvl\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.900644 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-utilities\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:54 crc kubenswrapper[5113]: E1208 17:42:54.901071 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.401054749 +0000 UTC m=+141.116847865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.903916 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.937352 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 08 17:42:54 crc kubenswrapper[5113]: I1208 17:42:54.937428 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.004891 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-utilities\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.004980 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.005007 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-catalog-content\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.005027 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghpvl\" (UniqueName: \"kubernetes.io/projected/6c70de64-72e0-4f9a-a819-2c1a683e43b7-kube-api-access-ghpvl\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.005834 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-utilities\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs"
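The downloads-747b44746d-klln7 readiness failure just above is a different failure class from the router/apiserver 500s: the TCP dial itself is refused, meaning nothing is listening on 10.217.0.13:8080 yet. An HTTP probe of this kind reduces to a GET with a short timeout, where any dial error or error status counts as failure. A minimal Go sketch follows; the one-second timeout and the success threshold are assumptions for illustration (the kubelet treats roughly 2xx/3xx responses as success).

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP performs the kind of check the prober entries describe:
// a GET against the pod IP and port, where a dial error (such as
// "connect: connection refused") or an error status is a failure.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Pod IP and port taken from the log entry; with no server
	// listening there, this prints a connection-refused failure.
	if err := probeHTTP("http://10.217.0.13:8080/", time.Second); err != nil {
		fmt.Println(err)
	}
}
```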
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.506381637 +0000 UTC m=+141.222174823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.006458 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-catalog-content\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.019374 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.043744 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghpvl\" (UniqueName: \"kubernetes.io/projected/6c70de64-72e0-4f9a-a819-2c1a683e43b7-kube-api-access-ghpvl\") pod \"redhat-marketplace-7m6gs\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " pod="openshift-marketplace/redhat-marketplace-7m6gs" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.134145 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.134423 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-utilities\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.134484 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-catalog-content\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.134583 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcxvf\" (UniqueName: \"kubernetes.io/projected/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-kube-api-access-rcxvf\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.134791 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.634754554 +0000 UTC m=+141.350547670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.138968 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qxws"] Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.139500 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6x2ww"] Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.154556 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x2ww" event={"ID":"f838eabb-c868-4308-ab80-860767b7bf4a","Type":"ContainerStarted","Data":"c0feb019fd14972bde712fe4c9a9ab01ee131fc6854d298bc2ed23085900eda2"} Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.166721 5113 generic.go:358] "Generic (PLEG): container finished" podID="bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" containerID="a8f6c80b6b23ddd72cc334e1d441af302dbdc73e697ac7b536dd4db76020b550" exitCode=0 Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.166800 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" event={"ID":"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152","Type":"ContainerDied","Data":"a8f6c80b6b23ddd72cc334e1d441af302dbdc73e697ac7b536dd4db76020b550"} Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.236755 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rcxvf\" (UniqueName: \"kubernetes.io/projected/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-kube-api-access-rcxvf\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.236846 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.236967 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-utilities\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.237016 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-catalog-content\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:42:55 crc 
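The PLEG lines above are the runtime-driven half of the kubelet sync loop: the pod lifecycle event generator relists container state from the CRI runtime and turns transitions into ContainerStarted/ContainerDied events, which the loop handles alongside the api ADD/UPDATE and probe events also visible here. A toy Go sketch of that dispatch follows; the event type names mirror the log, but the struct and handler are invented for illustration.

```go
package main

import "fmt"

// podLifecycleEvent is shaped like the PLEG events in the log:
// a pod ID plus a transition such as ContainerStarted/ContainerDied.
type podLifecycleEvent struct {
	podID, eventType, data string
}

// syncLoopStep dispatches one event the way the sync loop interleaves
// PLEG events with api ADD/UPDATE and probe results.
func syncLoopStep(ev podLifecycleEvent) {
	switch ev.eventType {
	case "ContainerStarted":
		fmt.Printf("SyncLoop (PLEG): pod %s container started (%s)\n", ev.podID, ev.data)
	case "ContainerDied":
		fmt.Printf("SyncLoop (PLEG): pod %s container died (%s); evaluate restart policy\n", ev.podID, ev.data)
	default:
		fmt.Printf("SyncLoop: unhandled event %q for pod %s\n", ev.eventType, ev.podID)
	}
}

func main() {
	// IDs abbreviated from the log entries above.
	events := []podLifecycleEvent{
		{"f838eabb-c868-4308-ab80-860767b7bf4a", "ContainerStarted", "c0feb019fd14"},
		{"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152", "ContainerDied", "a8f6c80b6b23"},
	}
	for _, ev := range events {
		syncLoopStep(ev)
	}
}
```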
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.237996 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-catalog-content\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.238641 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.738618054 +0000 UTC m=+141.454411170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.238718 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-utilities\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.256534 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.274191 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.291450 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcxvf\" (UniqueName: \"kubernetes.io/projected/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-kube-api-access-rcxvf\") pod \"redhat-marketplace-5qxws\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.340729 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.341994 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.841969199 +0000 UTC m=+141.557762325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.399517 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-xbdkt"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.399582 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.399665 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-n5msr"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.399689 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2k64"]
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.421482 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.422561 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.458886 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.459415 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:55.959397196 +0000 UTC m=+141.675190302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.464902 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.465383 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.465927 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.469222 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2k64"]
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.531398 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:55 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:55 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:55 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.531507 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.538348 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.563655 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.563858 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff1c11bd-6835-428a-818d-4856377b6cdb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.563955 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98p8v\" (UniqueName: \"kubernetes.io/projected/8be217c9-d60b-4e20-9733-d8011aa40811-kube-api-access-98p8v\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.563994 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff1c11bd-6835-428a-818d-4856377b6cdb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.582184 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.082130899 +0000 UTC m=+141.797924035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.582574 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-catalog-content\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.582651 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-utilities\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.684117 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-catalog-content\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.684602 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-utilities\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.684659 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff1c11bd-6835-428a-818d-4856377b6cdb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.684727 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.684771 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-98p8v\" (UniqueName: \"kubernetes.io/projected/8be217c9-d60b-4e20-9733-d8011aa40811-kube-api-access-98p8v\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.684807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff1c11bd-6835-428a-818d-4856377b6cdb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.685620 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-catalog-content\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.685734 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-utilities\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.685777 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff1c11bd-6835-428a-818d-4856377b6cdb-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.686093 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.1860767 +0000 UTC m=+141.901869886 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.743134 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-98p8v\" (UniqueName: \"kubernetes.io/projected/8be217c9-d60b-4e20-9733-d8011aa40811-kube-api-access-98p8v\") pod \"redhat-operators-d2k64\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.748638 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff1c11bd-6835-428a-818d-4856377b6cdb-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.751145 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5vsp"]
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.754935 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdsj7"]
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.797516 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.797778 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.29775499 +0000 UTC m=+142.013548106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.798144 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2k64"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.798554 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.799938 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.299925445 +0000 UTC m=+142.015718571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.822147 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 17:42:55 crc kubenswrapper[5113]: I1208 17:42:55.899665 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:55 crc kubenswrapper[5113]: E1208 17:42:55.900151 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.400125801 +0000 UTC m=+142.115918917 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.004719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.006084 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.506059714 +0000 UTC m=+142.221852840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.107025 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.107588 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.607566913 +0000 UTC m=+142.323360029 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.210804 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.211524 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.711505824 +0000 UTC m=+142.427298950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:56 crc kubenswrapper[5113]: W1208 17:42:56.232251 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b516ac_7e4a_4d32_9f80_c8ec25504b22.slice/crio-f15f7f0da3a73d59535971aa3f0ec7a285502b15dd71c6108fcc5ec67a211d33 WatchSource:0}: Error finding container f15f7f0da3a73d59535971aa3f0ec7a285502b15dd71c6108fcc5ec67a211d33: Status 404 returned error can't find the container with id f15f7f0da3a73d59535971aa3f0ec7a285502b15dd71c6108fcc5ec67a211d33
Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.311861 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.312050 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.812005568 +0000 UTC m=+142.527798694 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.312276 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.312735 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.812691075 +0000 UTC m=+142.528484191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360333 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360386 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"bcb1f207-abaa-42e5-bb62-4ee571918568","Type":"ContainerStarted","Data":"fa20f2db6c27cee7df9ce67f80365d1e668321cb0f424a7b853320d336f09e33"} Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360411 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdsj7"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360423 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hssz4"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360435 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerStarted","Data":"bbe101261d8520d0990091bdefb34ea5aef1aedba757eea838e0d70cedf2f99c"} Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360447 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wng52" event={"ID":"41d40883-8b52-49f7-b408-0d99251bf9f2","Type":"ContainerStarted","Data":"800574a5e489b4723401c3c67ad30d6d3d98fb531fbe899bfb9a39463a1cf260"} Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360460 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wng52"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360470 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hssz4" 
event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerStarted","Data":"38d4839b703d5d102887f5768badfb7da708aa2d4d8a935c886af25c3310d62d"} Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360480 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m6gs"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360496 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qxws"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360507 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360587 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x2vwv" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360605 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2k64"] Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.360643 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.361091 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.413331 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.413668 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-utilities\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.413844 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-catalog-content\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.413870 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4xl\" (UniqueName: \"kubernetes.io/projected/69369de4-4a5e-4f6c-bda2-0ce227331647-kube-api-access-kk4xl\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.414149 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:56.914125943 +0000 UTC m=+142.629919059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.515406 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:56 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:56 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:56 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.515498 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.516098 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-utilities\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.516181 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.516207 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.516243 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-catalog-content\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.516259 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4xl\" (UniqueName: \"kubernetes.io/projected/69369de4-4a5e-4f6c-bda2-0ce227331647-kube-api-access-kk4xl\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.516291 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.517016 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-utilities\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.517306 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-catalog-content\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.517655 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.017641573 +0000 UTC m=+142.733434689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.519092 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.519287 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.530749 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.544779 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4xl\" (UniqueName: \"kubernetes.io/projected/69369de4-4a5e-4f6c-bda2-0ce227331647-kube-api-access-kk4xl\") pod \"redhat-operators-wdsj7\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.545770 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.546621 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.617712 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.618100 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.618154 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.618324 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.118307061 +0000 UTC m=+142.834100187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.622088 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.624823 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.680412 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.719955 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.720455 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.220430166 +0000 UTC m=+142.936223332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.763693 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.783214 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.820729 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-secret-volume\") pod \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.820855 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pt7c\" (UniqueName: \"kubernetes.io/projected/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-kube-api-access-7pt7c\") pod \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.821144 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.821214 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-config-volume\") pod \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\" (UID: \"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152\") " Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.821392 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: 
\"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.821622 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.321598116 +0000 UTC m=+143.037391242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.822251 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" (UID: "bd0b145b-662b-4a5e-aad9-3b5bdbe7b152"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.834170 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.836138 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" (UID: "bd0b145b-662b-4a5e-aad9-3b5bdbe7b152"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.838580 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-kube-api-access-7pt7c" (OuterVolumeSpecName: "kube-api-access-7pt7c") pod "bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" (UID: "bd0b145b-662b-4a5e-aad9-3b5bdbe7b152"). InnerVolumeSpecName "kube-api-access-7pt7c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.842323 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a3643f-fbed-4614-a9cb-87b71148c273-metrics-certs\") pod \"network-metrics-daemon-bc5j2\" (UID: \"d0a3643f-fbed-4614-a9cb-87b71148c273\") " pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.869484 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.922799 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.923180 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.923192 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.923202 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7pt7c\" (UniqueName: \"kubernetes.io/projected/bd0b145b-662b-4a5e-aad9-3b5bdbe7b152-kube-api-access-7pt7c\") on node \"crc\" DevicePath \"\"" Dec 08 17:42:56 crc kubenswrapper[5113]: E1208 17:42:56.923450 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.423438724 +0000 UTC m=+143.139231840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.927133 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.950598 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.959589 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bc5j2" Dec 08 17:42:56 crc kubenswrapper[5113]: I1208 17:42:56.974972 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.031877 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.032595 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.532567148 +0000 UTC m=+143.248360264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.133561 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.133981 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.633962695 +0000 UTC m=+143.349755811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.232293 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m6gs" event={"ID":"6c70de64-72e0-4f9a-a819-2c1a683e43b7","Type":"ContainerStarted","Data":"3a88be7d3ae35bbb7880f2ff9b9ac16d649c7cadebf1dd7ed40a3dac9957936b"} Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.234992 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.235167 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.735139455 +0000 UTC m=+143.450932571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.236205 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.236801 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.736779847 +0000 UTC m=+143.452572963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.243706 5113 generic.go:358] "Generic (PLEG): container finished" podID="f838eabb-c868-4308-ab80-860767b7bf4a" containerID="df9f9ae6bc2abba28564a759f0d7a48b417f41024ea9baf7fd255e613cec20c6" exitCode=0 Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.243837 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x2ww" event={"ID":"f838eabb-c868-4308-ab80-860767b7bf4a","Type":"ContainerDied","Data":"df9f9ae6bc2abba28564a759f0d7a48b417f41024ea9baf7fd255e613cec20c6"} Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.246084 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ff1c11bd-6835-428a-818d-4856377b6cdb","Type":"ContainerStarted","Data":"d6b38522d1eb7e57bef0dd53e47a8b0decd111d68e9c0c127c0e5777fb37425a"} Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.248324 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerStarted","Data":"eee7af9152938f73873e1bd54deeccbf3b5959856e9d7935cf1f2030dd0704ac"} Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.250530 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" event={"ID":"bd0b145b-662b-4a5e-aad9-3b5bdbe7b152","Type":"ContainerDied","Data":"c8e518fe04c6684fc2cfb3ca5924d9695810239a2fc498b61e78215eb9f5f3f9"} Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.250561 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e518fe04c6684fc2cfb3ca5924d9695810239a2fc498b61e78215eb9f5f3f9" Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.250662 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420250-w8vp7" Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.261298 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qxws" event={"ID":"f9b516ac-7e4a-4d32-9f80-c8ec25504b22","Type":"ContainerStarted","Data":"f15f7f0da3a73d59535971aa3f0ec7a285502b15dd71c6108fcc5ec67a211d33"} Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.338134 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.838109862 +0000 UTC m=+143.553902978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.338189 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.338420 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.338970 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.838954234 +0000 UTC m=+143.554747350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.404573 5113 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd0b145b_662b_4a5e_aad9_3b5bdbe7b152.slice/crio-c8e518fe04c6684fc2cfb3ca5924d9695810239a2fc498b61e78215eb9f5f3f9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd0b145b_662b_4a5e_aad9_3b5bdbe7b152.slice\": RecentStats: unable to find data in memory cache]" Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.440348 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.441437 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:57.941418958 +0000 UTC m=+143.657212074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.518843 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:42:57 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:42:57 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:42:57 crc kubenswrapper[5113]: healthz check failed Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.518946 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.529635 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdsj7"] Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.542088 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.667349 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.167307252 +0000 UTC m=+143.883100368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.746842 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.747435 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.247405973 +0000 UTC m=+143.963199089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.750554 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-rddd2"] Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.760092 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"] Dec 08 17:42:57 crc kubenswrapper[5113]: I1208 17:42:57.849197 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:57 crc kubenswrapper[5113]: E1208 17:42:57.849680 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.349661011 +0000 UTC m=+144.065454127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.149467 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.150070 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.650015592 +0000 UTC m=+144.365808728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:42:58 crc kubenswrapper[5113]: W1208 17:42:58.166928 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-9815a4545bdd955c71ddc5e5855fa636f2bd3bfffbc97c54501f81062d54807d WatchSource:0}: Error finding container 9815a4545bdd955c71ddc5e5855fa636f2bd3bfffbc97c54501f81062d54807d: Status 404 returned error can't find the container with id 9815a4545bdd955c71ddc5e5855fa636f2bd3bfffbc97c54501f81062d54807d Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.167103 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bc5j2"] Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.205096 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b" Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.272387 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.272933 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.772906998 +0000 UTC m=+144.488700114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.369441 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"70bbe3fb1f2bc06e8266ef8931de9d5d924f2159d1c55c9b985d846adcab5de9"}
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.374000 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.374221 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.874181372 +0000 UTC m=+144.589974488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.374426 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.375236 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:58.875223128 +0000 UTC m=+144.591016244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.377734 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"9815a4545bdd955c71ddc5e5855fa636f2bd3bfffbc97c54501f81062d54807d"}
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.379046 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" event={"ID":"d0a3643f-fbed-4614-a9cb-87b71148c273","Type":"ContainerStarted","Data":"a9e44917f52aa6b2d9ed7c6b3006e9f0c9e5bbffa7953fcc68fad674fa6572ab"}
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.380189 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerStarted","Data":"9934f6e2cf83209c2cafdc0057d2527a76af4d884982fb6f616b137ac38f92a3"}
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.382397 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"97a1b8ef873ce25c21e8aa3e4ffc4d86cc0d910dbe0790bf753eee66d0f42ab3"}
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.500302 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.500476 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.000445545 +0000 UTC m=+144.716238661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.501236 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.501711 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.001687607 +0000 UTC m=+144.717480723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.517345 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:58 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:58 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:58 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.517437 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.602630 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.602812 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.102782045 +0000 UTC m=+144.818575161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.603482 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.604072 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.104026427 +0000 UTC m=+144.819819553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.704695 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.704946 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.204909731 +0000 UTC m=+144.920702877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.705580 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.705968 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.205951677 +0000 UTC m=+144.921744793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.807229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.807669 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.307622411 +0000 UTC m=+145.023415527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:58 crc kubenswrapper[5113]: I1208 17:42:58.908834 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:58 crc kubenswrapper[5113]: E1208 17:42:58.909210 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.40919582 +0000 UTC m=+145.124988936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.012937 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.013156 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.513125012 +0000 UTC m=+145.228918128 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.013602 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.014057 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.514048995 +0000 UTC m=+145.229842111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.027965 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.028118 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.028184 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-klln7"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.080809 5113 ???:1] "http: TLS handshake error from 192.168.126.11:42344: no serving certificate available for the kubelet"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.115320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.115590 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.615571695 +0000 UTC m=+145.331364801 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.170447 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-n9p2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.170547 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-n9p2l" podUID="8cf4b24b-8b34-4e71-b8e8-31fb36974b9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.217004 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.217623 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.717597237 +0000 UTC m=+145.433390353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.276952 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager" containerID="cri-o://b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89" gracePeriod=30
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.319139 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.319325 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.819291061 +0000 UTC m=+145.535084177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.319768 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.320158 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.820149553 +0000 UTC m=+145.535942669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.391051 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"bcb1f207-abaa-42e5-bb62-4ee571918568","Type":"ContainerStarted","Data":"2ccd789bcd7063d6bf73c06fe12e16a3932f6af99342eade9d87f033289bf212"}
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.392234 5113 generic.go:358] "Generic (PLEG): container finished" podID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerID="7a50db9482b9a63c9f7859a61a3c7ea19b17669bd354c6b9bac01f6568dad44d" exitCode=0
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.392373 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerDied","Data":"7a50db9482b9a63c9f7859a61a3c7ea19b17669bd354c6b9bac01f6568dad44d"}
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.420963 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.421419 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:42:59.921400846 +0000 UTC m=+145.637193962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.514303 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:42:59 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:42:59 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:42:59 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.514619 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.523002 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.523388 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.023369097 +0000 UTC m=+145.739162213 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.624371 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.624533 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.124513127 +0000 UTC m=+145.840306243 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.624729 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.625078 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.125069671 +0000 UTC m=+145.840862787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.662167 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"2e9ff9256516366dec5700efec9aa9ea1b5a5e334cabc75b15009630a9f7f12f"} pod="openshift-console/downloads-747b44746d-klln7" containerMessage="Container download-server failed liveness probe, will be restarted"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.662271 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" containerID="cri-o://2e9ff9256516366dec5700efec9aa9ea1b5a5e334cabc75b15009630a9f7f12f" gracePeriod=2
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.662863 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" containerID="cri-o://0dd538565877321c39023adc4ffe8860e82713adbb30fbc59eb10dc32a4bfb10" gracePeriod=30
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.663078 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.663129 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.668828 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-j9s7b"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.728637 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.729966 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.229927346 +0000 UTC m=+145.945720462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.756728 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.830080 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.830886 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.33085589 +0000 UTC m=+146.046649006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:42:59 crc kubenswrapper[5113]: I1208 17:42:59.931272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:42:59 crc kubenswrapper[5113]: E1208 17:42:59.931716 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.431700653 +0000 UTC m=+146.147493769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.033167 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.033863 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.533832458 +0000 UTC m=+146.249625574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.141293 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.146296 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.646243886 +0000 UTC m=+146.362037002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.249427 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.249958 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.749940731 +0000 UTC m=+146.465733848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.320017 5113 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.352609 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.352977 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.852899388 +0000 UTC m=+146.568692504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.419648 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.428852 5113 generic.go:358] "Generic (PLEG): container finished" podID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerID="f33cf794bf762044c6db6b82798f2f9a322d27bb049266ba60dc4727ba1d4577" exitCode=0
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.429662 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m6gs" event={"ID":"6c70de64-72e0-4f9a-a819-2c1a683e43b7","Type":"ContainerDied","Data":"f33cf794bf762044c6db6b82798f2f9a322d27bb049266ba60dc4727ba1d4577"}
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.437334 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" event={"ID":"17535922-286a-4eba-a833-f8feeb9af226","Type":"ContainerStarted","Data":"b8437bb88c4f1d211299b15b18228c47e7e800bf22aaf16875cc0d74184e97d3"}
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.451079 5113 generic.go:358] "Generic (PLEG): container finished" podID="17425c96-b772-49f5-8dca-94501ae13766" containerID="0dd538565877321c39023adc4ffe8860e82713adbb30fbc59eb10dc32a4bfb10" exitCode=0
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.451379 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" event={"ID":"17425c96-b772-49f5-8dca-94501ae13766","Type":"ContainerDied","Data":"0dd538565877321c39023adc4ffe8860e82713adbb30fbc59eb10dc32a4bfb10"}
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.454515 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.454995 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:00.954976902 +0000 UTC m=+146.670770018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.468429 5113 generic.go:358] "Generic (PLEG): container finished" podID="4ea4d14e-889b-4611-a96e-02f40133e325" containerID="b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89" exitCode=0
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.468547 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" event={"ID":"4ea4d14e-889b-4611-a96e-02f40133e325","Type":"ContainerDied","Data":"b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89"}
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.468584 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2" event={"ID":"4ea4d14e-889b-4611-a96e-02f40133e325","Type":"ContainerDied","Data":"6966b07ba14ba79a94824f69b1645c1bcd5589b86114a7870ce9afe8e27205a1"}
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.468602 5113 scope.go:117] "RemoveContainer" containerID="b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.468774 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-rddd2"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.475150 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d979478f4-zqjhw"]
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.476631 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" containerName="collect-profiles"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.476657 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" containerName="collect-profiles"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.476687 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.476710 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.476854 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" containerName="controller-manager"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.476893 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="bd0b145b-662b-4a5e-aad9-3b5bdbe7b152" containerName="collect-profiles"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.485304 5113 generic.go:358] "Generic (PLEG): container finished" podID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerID="2e9ff9256516366dec5700efec9aa9ea1b5a5e334cabc75b15009630a9f7f12f" exitCode=0
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.488306 5113 generic.go:358] "Generic (PLEG): container finished" podID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerID="6209f18517b12bef0df85d54b55f95739b52a54689c534fa560b9824ab5cdf57" exitCode=0
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.491006 5113 generic.go:358] "Generic (PLEG): container finished" podID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerID="acce3cac6dc9108e2fff9cd61451999e485ef970ad5808734b8a513b46b0c9b6" exitCode=0
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.516236 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 17:43:00 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Dec 08 17:43:00 crc kubenswrapper[5113]: [+]process-running ok
Dec 08 17:43:00 crc kubenswrapper[5113]: healthz check failed
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.516640 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.555574 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4d14e-889b-4611-a96e-02f40133e325-serving-cert\") pod \"4ea4d14e-889b-4611-a96e-02f40133e325\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.557321 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-config\") pod \"4ea4d14e-889b-4611-a96e-02f40133e325\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.557416 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbmph\" (UniqueName: \"kubernetes.io/projected/4ea4d14e-889b-4611-a96e-02f40133e325-kube-api-access-vbmph\") pod \"4ea4d14e-889b-4611-a96e-02f40133e325\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.557504 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ea4d14e-889b-4611-a96e-02f40133e325-tmp\") pod \"4ea4d14e-889b-4611-a96e-02f40133e325\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.557557 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-proxy-ca-bundles\") pod \"4ea4d14e-889b-4611-a96e-02f40133e325\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.557671 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.557752 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-client-ca\") pod \"4ea4d14e-889b-4611-a96e-02f40133e325\" (UID: \"4ea4d14e-889b-4611-a96e-02f40133e325\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.558258 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.058225315 +0000 UTC m=+146.774018431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.558435 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.558278 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ea4d14e-889b-4611-a96e-02f40133e325-tmp" (OuterVolumeSpecName: "tmp") pod "4ea4d14e-889b-4611-a96e-02f40133e325" (UID: "4ea4d14e-889b-4611-a96e-02f40133e325"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.558796 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ea4d14e-889b-4611-a96e-02f40133e325-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.558995 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-config" (OuterVolumeSpecName: "config") pod "4ea4d14e-889b-4611-a96e-02f40133e325" (UID: "4ea4d14e-889b-4611-a96e-02f40133e325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.559072 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ea4d14e-889b-4611-a96e-02f40133e325" (UID: "4ea4d14e-889b-4611-a96e-02f40133e325"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.559186 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.05917013 +0000 UTC m=+146.774963316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.559385 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4ea4d14e-889b-4611-a96e-02f40133e325" (UID: "4ea4d14e-889b-4611-a96e-02f40133e325"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.564235 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea4d14e-889b-4611-a96e-02f40133e325-kube-api-access-vbmph" (OuterVolumeSpecName: "kube-api-access-vbmph") pod "4ea4d14e-889b-4611-a96e-02f40133e325" (UID: "4ea4d14e-889b-4611-a96e-02f40133e325"). InnerVolumeSpecName "kube-api-access-vbmph". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.564314 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea4d14e-889b-4611-a96e-02f40133e325-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ea4d14e-889b-4611-a96e-02f40133e325" (UID: "4ea4d14e-889b-4611-a96e-02f40133e325"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.660380 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.660665 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbmph\" (UniqueName: \"kubernetes.io/projected/4ea4d14e-889b-4611-a96e-02f40133e325-kube-api-access-vbmph\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.660689 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.660697 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.660707 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea4d14e-889b-4611-a96e-02f40133e325-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.660715 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea4d14e-889b-4611-a96e-02f40133e325-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.660781 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.160750481 +0000 UTC m=+146.876543597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.762766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.763375 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.263351038 +0000 UTC m=+146.979144154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.906873 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.907404 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.407381076 +0000 UTC m=+147.123174192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.933048 5113 scope.go:117] "RemoveContainer" containerID="b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89"
Dec 08 17:43:00 crc kubenswrapper[5113]: E1208 17:43:00.935233 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89\": container with ID starting with b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89 not found: ID does not exist" containerID="b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89"
Dec 08 17:43:00 crc kubenswrapper[5113]: I1208 17:43:00.935293 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89"} err="failed to get container status \"b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89\": rpc error: code = NotFound desc = could not find container \"b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89\": container with ID starting with b42a1549840a156132e8ce2980799394c5cc8fc216b0d412eb787acbc4ae0b89 not found: ID does not exist"
Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.009500 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.009941 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.509918951 +0000 UTC m=+147.225712067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.113280 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.115096 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.613868673 +0000 UTC m=+147.329661779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.211534 5113 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T17:43:00.320083798Z","UUID":"c401243c-dfd1-4be0-8bff-5cda28a472e2","Handler":null,"Name":"","Endpoint":""}
Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.214474 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs"
Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.214973 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 17:43:01.714953982 +0000 UTC m=+147.430747098 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-r9xfs" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.220284 5113 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.220384 5113 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.316251 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.327352 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.360224 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.362909 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.365711 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:01 crc kubenswrapper[5113]: E1208 17:43:01.365817 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.418506 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.423056 5113 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.423116 5113 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.450728 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-r9xfs\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.464930 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.473697 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.504151 5113 generic.go:358] "Generic (PLEG): container finished" podID="bcb1f207-abaa-42e5-bb62-4ee571918568" containerID="2ccd789bcd7063d6bf73c06fe12e16a3932f6af99342eade9d87f033289bf212" exitCode=0 Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.516057 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:01 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:01 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:01 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.516155 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.574840 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d979478f4-zqjhw"] Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.578310 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.615420 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.615927 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.616526 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.616714 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.616901 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.617055 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.619723 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.636624 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.637953 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerDied","Data":"2e9ff9256516366dec5700efec9aa9ea1b5a5e334cabc75b15009630a9f7f12f"} Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.638086 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wng52" event={"ID":"41d40883-8b52-49f7-b408-0d99251bf9f2","Type":"ContainerDied","Data":"6209f18517b12bef0df85d54b55f95739b52a54689c534fa560b9824ab5cdf57"} Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.638183 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hssz4" event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerDied","Data":"acce3cac6dc9108e2fff9cd61451999e485ef970ad5808734b8a513b46b0c9b6"} Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.638255 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-rddd2"] Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.638334 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-rddd2"] Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.638413 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"bcb1f207-abaa-42e5-bb62-4ee571918568","Type":"ContainerDied","Data":"2ccd789bcd7063d6bf73c06fe12e16a3932f6af99342eade9d87f033289bf212"} Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.722533 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-proxy-ca-bundles\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.722661 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4fb66f3-9cc5-4140-a02d-64337bf308a6-serving-cert\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.722743 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k89nr\" (UniqueName: \"kubernetes.io/projected/f4fb66f3-9cc5-4140-a02d-64337bf308a6-kube-api-access-k89nr\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.728931 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-config\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.729075 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-client-ca\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.729127 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4fb66f3-9cc5-4140-a02d-64337bf308a6-tmp\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.838556 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-config\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.838645 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-client-ca\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.838687 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4fb66f3-9cc5-4140-a02d-64337bf308a6-tmp\") 
pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.838772 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-proxy-ca-bundles\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.838833 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4fb66f3-9cc5-4140-a02d-64337bf308a6-serving-cert\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.838899 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k89nr\" (UniqueName: \"kubernetes.io/projected/f4fb66f3-9cc5-4140-a02d-64337bf308a6-kube-api-access-k89nr\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.843790 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4fb66f3-9cc5-4140-a02d-64337bf308a6-tmp\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.844013 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-proxy-ca-bundles\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.845104 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-client-ca\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.886349 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-config\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.896370 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4fb66f3-9cc5-4140-a02d-64337bf308a6-serving-cert\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.897720 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k89nr\" (UniqueName: \"kubernetes.io/projected/f4fb66f3-9cc5-4140-a02d-64337bf308a6-kube-api-access-k89nr\") pod \"controller-manager-d979478f4-zqjhw\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:01 crc kubenswrapper[5113]: I1208 17:43:01.994789 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.416954 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=9.416933188 podStartE2EDuration="9.416933188s" podCreationTimestamp="2025-12-08 17:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:01.92021048 +0000 UTC m=+147.636003616" watchObservedRunningTime="2025-12-08 17:43:02.416933188 +0000 UTC m=+148.132726304" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.418861 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-r9xfs"] Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.524376 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:02 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:02 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:02 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.525049 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.717273 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea4d14e-889b-4611-a96e-02f40133e325" path="/var/lib/kubelet/pods/4ea4d14e-889b-4611-a96e-02f40133e325/volumes" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.717941 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" event={"ID":"17425c96-b772-49f5-8dca-94501ae13766","Type":"ContainerDied","Data":"e9f743ac10b66012b82814167ddc3b2bc4f86a4e42603d59a96ac3d61b206c2f"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.717977 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f743ac10b66012b82814167ddc3b2bc4f86a4e42603d59a96ac3d61b206c2f" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.717989 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ff1c11bd-6835-428a-818d-4856377b6cdb","Type":"ContainerStarted","Data":"d9e216f233d02db44f98a50f18c01229a87af07b533a1631258aa16640b088d5"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.724109 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" 
event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"58075ff45f7f86a6d1ada55b5ce775298d3db941bce5c181516ab3bf5e51877d"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.724437 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.725252 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.730072 5113 generic.go:358] "Generic (PLEG): container finished" podID="8be217c9-d60b-4e20-9733-d8011aa40811" containerID="aabbbdd34b56782314e6c887b022f558a27e610e3e24fd6a7f86173456df397d" exitCode=0 Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.730192 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerDied","Data":"aabbbdd34b56782314e6c887b022f558a27e610e3e24fd6a7f86173456df397d"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.733314 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"0110b01b542953937f6618953a0858751a10906d026cc722749c983286060e6a"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.737671 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" event={"ID":"d0a3643f-fbed-4614-a9cb-87b71148c273","Type":"ContainerStarted","Data":"8b7a4728f756f13c176ea1809ce8c7448ad3def2cb369df15324c12fcb4cdc10"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.742522 5113 generic.go:358] "Generic (PLEG): container finished" podID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerID="93fba2c822fd5152c77eebc6ea5438b74e646cfe4d4c2a1aecc4fb39e90f8502" exitCode=0 Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.742627 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerDied","Data":"93fba2c822fd5152c77eebc6ea5438b74e646cfe4d4c2a1aecc4fb39e90f8502"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.743718 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=7.743693046 podStartE2EDuration="7.743693046s" podCreationTimestamp="2025-12-08 17:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:02.741594512 +0000 UTC m=+148.457387628" watchObservedRunningTime="2025-12-08 17:43:02.743693046 +0000 UTC m=+148.459486172" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.780301 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4ca3f414886ec257bb1886a631871dbef356749aa42dbfc185f4a507075f313c"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.789339 5113 generic.go:358] "Generic (PLEG): container finished" podID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" 
containerID="cb0929f500d74ebc0e9500e91f2225076bd34ef43d387b00e2525c20bd8f02e0" exitCode=0 Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.789660 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qxws" event={"ID":"f9b516ac-7e4a-4d32-9f80-c8ec25504b22","Type":"ContainerDied","Data":"cb0929f500d74ebc0e9500e91f2225076bd34ef43d387b00e2525c20bd8f02e0"} Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.831682 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t"] Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.832673 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.832697 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.832833 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="17425c96-b772-49f5-8dca-94501ae13766" containerName="route-controller-manager" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.924637 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-config\") pod \"17425c96-b772-49f5-8dca-94501ae13766\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.925325 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-client-ca\") pod \"17425c96-b772-49f5-8dca-94501ae13766\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.925417 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9sc7\" (UniqueName: \"kubernetes.io/projected/17425c96-b772-49f5-8dca-94501ae13766-kube-api-access-k9sc7\") pod \"17425c96-b772-49f5-8dca-94501ae13766\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.925443 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17425c96-b772-49f5-8dca-94501ae13766-tmp\") pod \"17425c96-b772-49f5-8dca-94501ae13766\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.925492 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17425c96-b772-49f5-8dca-94501ae13766-serving-cert\") pod \"17425c96-b772-49f5-8dca-94501ae13766\" (UID: \"17425c96-b772-49f5-8dca-94501ae13766\") " Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.926859 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-client-ca" (OuterVolumeSpecName: "client-ca") pod "17425c96-b772-49f5-8dca-94501ae13766" (UID: "17425c96-b772-49f5-8dca-94501ae13766"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.927134 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17425c96-b772-49f5-8dca-94501ae13766-tmp" (OuterVolumeSpecName: "tmp") pod "17425c96-b772-49f5-8dca-94501ae13766" (UID: "17425c96-b772-49f5-8dca-94501ae13766"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.927635 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-config" (OuterVolumeSpecName: "config") pod "17425c96-b772-49f5-8dca-94501ae13766" (UID: "17425c96-b772-49f5-8dca-94501ae13766"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.947904 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17425c96-b772-49f5-8dca-94501ae13766-kube-api-access-k9sc7" (OuterVolumeSpecName: "kube-api-access-k9sc7") pod "17425c96-b772-49f5-8dca-94501ae13766" (UID: "17425c96-b772-49f5-8dca-94501ae13766"). InnerVolumeSpecName "kube-api-access-k9sc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:02 crc kubenswrapper[5113]: I1208 17:43:02.955256 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17425c96-b772-49f5-8dca-94501ae13766-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17425c96-b772-49f5-8dca-94501ae13766" (UID: "17425c96-b772-49f5-8dca-94501ae13766"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.026952 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.026994 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k9sc7\" (UniqueName: \"kubernetes.io/projected/17425c96-b772-49f5-8dca-94501ae13766-kube-api-access-k9sc7\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.027007 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/17425c96-b772-49f5-8dca-94501ae13766-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.027018 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17425c96-b772-49f5-8dca-94501ae13766-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.027029 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17425c96-b772-49f5-8dca-94501ae13766-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.516064 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:03 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:03 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:03 crc 
kubenswrapper[5113]: healthz check failed Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.516163 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.677992 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t"] Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.678639 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d979478f4-zqjhw"] Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.678370 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.828107 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" event={"ID":"17535922-286a-4eba-a833-f8feeb9af226","Type":"ContainerStarted","Data":"d226f0529217d0b2bfc48f3bf1b1af9275654422e8bc0115e21d2c2528903e99"} Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.831362 5113 generic.go:358] "Generic (PLEG): container finished" podID="ff1c11bd-6835-428a-818d-4856377b6cdb" containerID="d9e216f233d02db44f98a50f18c01229a87af07b533a1631258aa16640b088d5" exitCode=0 Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.831430 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ff1c11bd-6835-428a-818d-4856377b6cdb","Type":"ContainerDied","Data":"d9e216f233d02db44f98a50f18c01229a87af07b533a1631258aa16640b088d5"} Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.839475 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" event={"ID":"401e85c2-a1e6-4642-80cf-23e461cef995","Type":"ContainerStarted","Data":"0083c20293c97010ff9040f46107b86528768f11ead7c88f45b28058f5b9cf2a"} Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.839737 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-config\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.839804 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkcws\" (UniqueName: \"kubernetes.io/projected/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-kube-api-access-gkcws\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.839885 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-serving-cert\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " 
pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.840349 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-tmp\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.840493 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-client-ca\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.849329 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerStarted","Data":"4e224fc2a91768c03e8c25bc87fd1989a0e510734bda2845b8c8af8b1e3234b6"} Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.857880 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bc5j2" event={"ID":"d0a3643f-fbed-4614-a9cb-87b71148c273","Type":"ContainerStarted","Data":"78267ef5db992b20934116a351d66a40c6a9e267fb84c58fef3bc8f5b898bd12"} Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.863981 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" event={"ID":"f4fb66f3-9cc5-4140-a02d-64337bf308a6","Type":"ContainerStarted","Data":"abe143a0fe4235ac1a568ca3b13de6e1e60e5b0887d66b1d9f1662269db865fa"} Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.864167 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.872090 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.872114 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.872150 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.942139 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-config\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.942230 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gkcws\" (UniqueName: \"kubernetes.io/projected/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-kube-api-access-gkcws\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.942287 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-serving-cert\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.942351 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-tmp\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.942448 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-client-ca\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.943994 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-client-ca\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " 
pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.945712 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-tmp\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.947631 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-config\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.958812 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-serving-cert\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:03 crc kubenswrapper[5113]: I1208 17:43:03.969075 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkcws\" (UniqueName: \"kubernetes.io/projected/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-kube-api-access-gkcws\") pod \"route-controller-manager-67675f8b7c-bxm5t\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.021853 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.268880 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"] Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.274055 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-prc9d"] Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.555011 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:04 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:04 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:04 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.555738 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.720506 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17425c96-b772-49f5-8dca-94501ae13766" path="/var/lib/kubelet/pods/17425c96-b772-49f5-8dca-94501ae13766/volumes" Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.923384 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" event={"ID":"401e85c2-a1e6-4642-80cf-23e461cef995","Type":"ContainerStarted","Data":"18e9efd04ea4f44e72b9e8240ced2f93470579db03d4a422b8ba77cf0cf98ed7"} Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.924203 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.927430 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" event={"ID":"f4fb66f3-9cc5-4140-a02d-64337bf308a6","Type":"ContainerStarted","Data":"a831cbfb8745ddbce90f398757f712581816e26c45c93a6b778030fb4f8300d8"} Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.927471 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.928832 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:04 crc kubenswrapper[5113]: I1208 17:43:04.928894 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.128229 5113 patch_prober.go:28] interesting pod/controller-manager-d979478f4-zqjhw container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.128687 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.141776 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t"] Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.143716 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" podStartSLOduration=128.14369365 podStartE2EDuration="2m8.14369365s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:05.141677458 +0000 UTC m=+150.857470574" watchObservedRunningTime="2025-12-08 17:43:05.14369365 +0000 UTC m=+150.859486766" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.161482 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:43:05 crc kubenswrapper[5113]: W1208 17:43:05.167684 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0000dfb7_284c_4aad_9ce7_cf9c65b09a0e.slice/crio-27d9c4e1bb10ae522fddb7c9a0579d346c8f1466f5bf62a98673bc1c11ea5f67 WatchSource:0}: Error finding container 27d9c4e1bb10ae522fddb7c9a0579d346c8f1466f5bf62a98673bc1c11ea5f67: Status 404 returned error can't find the container with id 27d9c4e1bb10ae522fddb7c9a0579d346c8f1466f5bf62a98673bc1c11ea5f67 Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.183202 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podStartSLOduration=8.183172761 podStartE2EDuration="8.183172761s" podCreationTimestamp="2025-12-08 17:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:05.172931758 +0000 UTC m=+150.888724874" watchObservedRunningTime="2025-12-08 17:43:05.183172761 +0000 UTC m=+150.898965877" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.196206 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bc5j2" podStartSLOduration=128.196178434 podStartE2EDuration="2m8.196178434s" podCreationTimestamp="2025-12-08 17:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:05.190270292 +0000 UTC m=+150.906063438" watchObservedRunningTime="2025-12-08 17:43:05.196178434 +0000 UTC m=+150.911971550" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.334109 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb1f207-abaa-42e5-bb62-4ee571918568-kubelet-dir\") pod 
\"bcb1f207-abaa-42e5-bb62-4ee571918568\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.334277 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb1f207-abaa-42e5-bb62-4ee571918568-kube-api-access\") pod \"bcb1f207-abaa-42e5-bb62-4ee571918568\" (UID: \"bcb1f207-abaa-42e5-bb62-4ee571918568\") " Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.334769 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb1f207-abaa-42e5-bb62-4ee571918568-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bcb1f207-abaa-42e5-bb62-4ee571918568" (UID: "bcb1f207-abaa-42e5-bb62-4ee571918568"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.400052 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb1f207-abaa-42e5-bb62-4ee571918568-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bcb1f207-abaa-42e5-bb62-4ee571918568" (UID: "bcb1f207-abaa-42e5-bb62-4ee571918568"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.436097 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb1f207-abaa-42e5-bb62-4ee571918568-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.436147 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb1f207-abaa-42e5-bb62-4ee571918568-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.527869 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:05 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:05 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:05 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.527968 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.990463 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.990813 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"bcb1f207-abaa-42e5-bb62-4ee571918568","Type":"ContainerDied","Data":"fa20f2db6c27cee7df9ce67f80365d1e668321cb0f424a7b853320d336f09e33"} Dec 08 17:43:05 crc kubenswrapper[5113]: I1208 17:43:05.991326 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa20f2db6c27cee7df9ce67f80365d1e668321cb0f424a7b853320d336f09e33" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.002197 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" event={"ID":"17535922-286a-4eba-a833-f8feeb9af226","Type":"ContainerStarted","Data":"97e43b1ec64414f9c8224caa3f2baf47ebade2c8c1097b9eff6653e89541297a"} Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.010854 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" event={"ID":"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e","Type":"ContainerStarted","Data":"e683d2fa9449311f6e3b9727cb035ac6916d57c5b7987e152b9d757c0f444f6b"} Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.010946 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.010971 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" event={"ID":"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e","Type":"ContainerStarted","Data":"27d9c4e1bb10ae522fddb7c9a0579d346c8f1466f5bf62a98673bc1c11ea5f67"} Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.014923 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.014979 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.068526 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" podStartSLOduration=9.068485139 podStartE2EDuration="9.068485139s" podCreationTimestamp="2025-12-08 17:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:06.067316679 +0000 UTC m=+151.783109815" watchObservedRunningTime="2025-12-08 17:43:06.068485139 +0000 UTC m=+151.784278265" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.078489 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-nfj76" podStartSLOduration=39.078459984 podStartE2EDuration="39.078459984s" podCreationTimestamp="2025-12-08 17:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:06.046908406 +0000 UTC m=+151.762701532" watchObservedRunningTime="2025-12-08 17:43:06.078459984 +0000 UTC m=+151.794253100" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.102223 5113 patch_prober.go:28] interesting pod/route-controller-manager-67675f8b7c-bxm5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.102325 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.157195 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.514175 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:06 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:06 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:06 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:06 crc kubenswrapper[5113]: I1208 17:43:06.514276 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:07 crc kubenswrapper[5113]: I1208 17:43:07.170249 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:07 crc kubenswrapper[5113]: I1208 17:43:07.818319 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:07 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:07 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:07 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:07 crc kubenswrapper[5113]: I1208 17:43:07.818392 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:08 crc kubenswrapper[5113]: I1208 17:43:08.515370 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:08 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:08 crc kubenswrapper[5113]: [+]process-running ok 
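
The repeating [-]/[+] lines in the router's startup-probe output are per-check results from an aggregated healthz endpoint: each named sub-check (backend-http, has-synced, process-running) reports ok or failed, any failure turns the whole response into an HTTP 500, and the kubelet's HTTP prober treats anything outside the 200-399 range as a probe failure, logging the first line of the response body as start-of-body. A minimal Go sketch of such an endpoint, with illustrative check implementations and port (not the actual openshift-router or k8s.io/apiserver code):

```go
// Minimal sketch of an aggregated healthz endpoint that produces output in the
// "[-]check failed" / "[+]check ok" format seen in the router probe logs above.
// The check bodies and the listen port are assumptions for illustration.
package main

import (
	"fmt"
	"net/http"
)

// A named health check; check returns nil when healthy.
type check struct {
	name  string
	check func() error
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if err := c.check(); err != nil {
				failed = true
				// Failure reasons are withheld from unauthenticated
				// probes, matching the "reason withheld" lines above.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			// The kubelet then logs: HTTP probe failed with statuscode: 500.
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("backend not ready") }},
		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
		{"process-running", func() error { return nil }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	http.ListenAndServe(":1936", nil) // port is an assumption for the sketch
}
```
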
Dec 08 17:43:08 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:08 crc kubenswrapper[5113]: I1208 17:43:08.515493 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:09 crc kubenswrapper[5113]: I1208 17:43:09.028806 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:09 crc kubenswrapper[5113]: I1208 17:43:09.029129 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:09 crc kubenswrapper[5113]: I1208 17:43:09.171007 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-n9p2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 08 17:43:09 crc kubenswrapper[5113]: I1208 17:43:09.171119 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-n9p2l" podUID="8cf4b24b-8b34-4e71-b8e8-31fb36974b9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 08 17:43:09 crc kubenswrapper[5113]: I1208 17:43:09.516124 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:09 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:09 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:09 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:09 crc kubenswrapper[5113]: I1208 17:43:09.516296 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:10 crc kubenswrapper[5113]: I1208 17:43:10.589993 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:10 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:10 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:10 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:10 crc kubenswrapper[5113]: I1208 17:43:10.590581 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:11 crc kubenswrapper[5113]: E1208 17:43:11.240343 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:11 crc kubenswrapper[5113]: E1208 17:43:11.241992 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:11 crc kubenswrapper[5113]: E1208 17:43:11.243302 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:11 crc kubenswrapper[5113]: E1208 17:43:11.243346 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:43:11 crc kubenswrapper[5113]: I1208 17:43:11.542892 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:11 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:11 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:11 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:11 crc kubenswrapper[5113]: I1208 17:43:11.543000 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:12 crc kubenswrapper[5113]: I1208 17:43:12.554769 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:12 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:12 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:12 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:12 crc kubenswrapper[5113]: I1208 17:43:12.554873 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:13 crc kubenswrapper[5113]: I1208 17:43:13.514275 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:13 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:13 crc kubenswrapper[5113]: 
[+]process-running ok Dec 08 17:43:13 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:13 crc kubenswrapper[5113]: I1208 17:43:13.514495 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:14 crc kubenswrapper[5113]: I1208 17:43:14.515579 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 17:43:14 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:14 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:14 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:14 crc kubenswrapper[5113]: I1208 17:43:14.515716 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.343232 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-5hq9p_ffa3574d-c847-4258-b8f3-7a044a52f07b/kube-multus-additional-cni-plugins/0.log" Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.343325 5113 generic.go:358] "Generic (PLEG): container finished" podID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" exitCode=137 Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.344201 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" event={"ID":"ffa3574d-c847-4258-b8f3-7a044a52f07b","Type":"ContainerDied","Data":"84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653"} Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.442226 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d979478f4-zqjhw"] Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.451555 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" containerID="cri-o://a831cbfb8745ddbce90f398757f712581816e26c45c93a6b778030fb4f8300d8" gracePeriod=30 Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.466827 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t"] Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.467150 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" containerID="cri-o://e683d2fa9449311f6e3b9727cb035ac6916d57c5b7987e152b9d757c0f444f6b" gracePeriod=30 Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.516327 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kjgph container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Dec 08 17:43:15 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Dec 08 17:43:15 crc kubenswrapper[5113]: [+]process-running ok Dec 08 17:43:15 crc kubenswrapper[5113]: healthz check failed Dec 08 17:43:15 crc kubenswrapper[5113]: I1208 17:43:15.516444 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" podUID="be1fc1ba-3184-4f3c-b7fd-e2d14e3ca3fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:43:16 crc kubenswrapper[5113]: I1208 17:43:16.010406 5113 patch_prober.go:28] interesting pod/controller-manager-d979478f4-zqjhw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 08 17:43:16 crc kubenswrapper[5113]: I1208 17:43:16.010481 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 08 17:43:16 crc kubenswrapper[5113]: I1208 17:43:16.018870 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:16 crc kubenswrapper[5113]: I1208 17:43:16.018942 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:16 crc kubenswrapper[5113]: I1208 17:43:16.516911 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:43:16 crc kubenswrapper[5113]: I1208 17:43:16.522565 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-kjgph" Dec 08 17:43:17 crc kubenswrapper[5113]: I1208 17:43:17.089174 5113 patch_prober.go:28] interesting pod/route-controller-manager-67675f8b7c-bxm5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Dec 08 17:43:17 crc kubenswrapper[5113]: I1208 17:43:17.089273 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.028367 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 
17:43:19.028937 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.171322 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-n9p2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.171413 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-n9p2l" podUID="8cf4b24b-8b34-4e71-b8e8-31fb36974b9a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.377332 5113 generic.go:358] "Generic (PLEG): container finished" podID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerID="e683d2fa9449311f6e3b9727cb035ac6916d57c5b7987e152b9d757c0f444f6b" exitCode=0 Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.377453 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" event={"ID":"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e","Type":"ContainerDied","Data":"e683d2fa9449311f6e3b9727cb035ac6916d57c5b7987e152b9d757c0f444f6b"} Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.587193 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54820: no serving certificate available for the kubelet" Dec 08 17:43:19 crc kubenswrapper[5113]: I1208 17:43:19.756427 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-kzjxt" Dec 08 17:43:20 crc kubenswrapper[5113]: I1208 17:43:20.390659 5113 generic.go:358] "Generic (PLEG): container finished" podID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerID="a831cbfb8745ddbce90f398757f712581816e26c45c93a6b778030fb4f8300d8" exitCode=0 Dec 08 17:43:20 crc kubenswrapper[5113]: I1208 17:43:20.390828 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" event={"ID":"f4fb66f3-9cc5-4140-a02d-64337bf308a6","Type":"ContainerDied","Data":"a831cbfb8745ddbce90f398757f712581816e26c45c93a6b778030fb4f8300d8"} Dec 08 17:43:21 crc kubenswrapper[5113]: E1208 17:43:21.182691 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:21 crc kubenswrapper[5113]: E1208 17:43:21.184674 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:21 crc 
kubenswrapper[5113]: E1208 17:43:21.185130 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:21 crc kubenswrapper[5113]: E1208 17:43:21.185275 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:43:26 crc kubenswrapper[5113]: I1208 17:43:26.009774 5113 patch_prober.go:28] interesting pod/controller-manager-d979478f4-zqjhw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 08 17:43:26 crc kubenswrapper[5113]: I1208 17:43:26.010286 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 08 17:43:26 crc kubenswrapper[5113]: I1208 17:43:26.015084 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:26 crc kubenswrapper[5113]: I1208 17:43:26.015179 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:26 crc kubenswrapper[5113]: I1208 17:43:26.019382 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.104066 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.104868 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bcb1f207-abaa-42e5-bb62-4ee571918568" containerName="pruner" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.104886 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb1f207-abaa-42e5-bb62-4ee571918568" containerName="pruner" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.105066 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="bcb1f207-abaa-42e5-bb62-4ee571918568" containerName="pruner" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.356209 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:43:27 crc 
kubenswrapper[5113]: I1208 17:43:27.356528 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.359310 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.360708 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.463118 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.463258 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.565203 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.565289 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.565400 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.591918 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:27 crc kubenswrapper[5113]: I1208 17:43:27.679743 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:28 crc kubenswrapper[5113]: I1208 17:43:28.091251 5113 patch_prober.go:28] interesting pod/route-controller-manager-67675f8b7c-bxm5t container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:43:28 crc kubenswrapper[5113]: I1208 17:43:28.091373 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.027937 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.028089 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.028176 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.028893 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.028931 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.029049 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"4e224fc2a91768c03e8c25bc87fd1989a0e510734bda2845b8c8af8b1e3234b6"} pod="openshift-console/downloads-747b44746d-klln7" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.029097 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" containerID="cri-o://4e224fc2a91768c03e8c25bc87fd1989a0e510734bda2845b8c8af8b1e3234b6" gracePeriod=2 Dec 08 17:43:29 crc kubenswrapper[5113]: I1208 17:43:29.857364 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:43:29 crc 
kubenswrapper[5113]: I1208 17:43:29.866596 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-n9p2l" Dec 08 17:43:30 crc kubenswrapper[5113]: I1208 17:43:30.466324 5113 generic.go:358] "Generic (PLEG): container finished" podID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerID="4e224fc2a91768c03e8c25bc87fd1989a0e510734bda2845b8c8af8b1e3234b6" exitCode=0 Dec 08 17:43:30 crc kubenswrapper[5113]: I1208 17:43:30.467453 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerDied","Data":"4e224fc2a91768c03e8c25bc87fd1989a0e510734bda2845b8c8af8b1e3234b6"} Dec 08 17:43:30 crc kubenswrapper[5113]: I1208 17:43:30.467505 5113 scope.go:117] "RemoveContainer" containerID="2e9ff9256516366dec5700efec9aa9ea1b5a5e334cabc75b15009630a9f7f12f" Dec 08 17:43:31 crc kubenswrapper[5113]: E1208 17:43:31.182894 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:31 crc kubenswrapper[5113]: E1208 17:43:31.183646 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:31 crc kubenswrapper[5113]: E1208 17:43:31.184031 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 17:43:31 crc kubenswrapper[5113]: E1208 17:43:31.184158 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 17:43:32 crc kubenswrapper[5113]: I1208 17:43:32.301382 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.381202 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.381521 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.392626 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.427479 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.429642 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kube-api-access\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.429743 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-var-lock\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.495304 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.500159 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" event={"ID":"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e","Type":"ContainerDied","Data":"27d9c4e1bb10ae522fddb7c9a0579d346c8f1466f5bf62a98673bc1c11ea5f67"} Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.522652 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl"] Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.524656 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.524709 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.524819 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" containerName="route-controller-manager" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-serving-cert\") pod \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531152 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-client-ca\") pod \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\" (UID: 
\"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531236 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-config\") pod \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531332 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-tmp\") pod \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531448 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkcws\" (UniqueName: \"kubernetes.io/projected/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-kube-api-access-gkcws\") pod \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\" (UID: \"0000dfb7-284c-4aad-9ce7-cf9c65b09a0e\") " Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531673 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kube-api-access\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531725 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-var-lock\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.531757 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.533451 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" (UID: "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.533544 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-config" (OuterVolumeSpecName: "config") pod "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" (UID: "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.533675 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-var-lock\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.534299 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kubelet-dir\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.535748 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-tmp" (OuterVolumeSpecName: "tmp") pod "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" (UID: "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.536220 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.556354 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" (UID: "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.559080 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kube-api-access\") pod \"installer-12-crc\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.560604 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-kube-api-access-gkcws" (OuterVolumeSpecName: "kube-api-access-gkcws") pod "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" (UID: "0000dfb7-284c-4aad-9ce7-cf9c65b09a0e"). InnerVolumeSpecName "kube-api-access-gkcws". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.565324 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl"] Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.633144 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-serving-cert\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.633226 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bd5t\" (UniqueName: \"kubernetes.io/projected/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-kube-api-access-2bd5t\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.633265 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-client-ca\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.633286 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-tmp\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.633307 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-config\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.633777 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.637461 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.637504 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.637515 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gkcws\" (UniqueName: \"kubernetes.io/projected/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-kube-api-access-gkcws\") on node \"crc\" DevicePath \"\"" Dec 
08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.637527 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.739477 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-serving-cert\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.739556 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2bd5t\" (UniqueName: \"kubernetes.io/projected/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-kube-api-access-2bd5t\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.739601 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-client-ca\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.739633 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-tmp\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.739655 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-config\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.740964 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-tmp\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.741900 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-client-ca\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.742001 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-config\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: 
\"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.765645 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bd5t\" (UniqueName: \"kubernetes.io/projected/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-kube-api-access-2bd5t\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.785025 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.863220 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-serving-cert\") pod \"route-controller-manager-544d4d5ff5-bshnl\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:34 crc kubenswrapper[5113]: I1208 17:43:34.914163 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:35 crc kubenswrapper[5113]: I1208 17:43:35.505884 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t" Dec 08 17:43:35 crc kubenswrapper[5113]: I1208 17:43:35.540759 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t"] Dec 08 17:43:35 crc kubenswrapper[5113]: I1208 17:43:35.540846 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67675f8b7c-bxm5t"] Dec 08 17:43:36 crc kubenswrapper[5113]: I1208 17:43:36.010785 5113 patch_prober.go:28] interesting pod/controller-manager-d979478f4-zqjhw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Dec 08 17:43:36 crc kubenswrapper[5113]: I1208 17:43:36.010886 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Dec 08 17:43:36 crc kubenswrapper[5113]: I1208 17:43:36.689628 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0000dfb7-284c-4aad-9ce7-cf9c65b09a0e" path="/var/lib/kubelet/pods/0000dfb7-284c-4aad-9ce7-cf9c65b09a0e/volumes" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.062064 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.126317 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff1c11bd-6835-428a-818d-4856377b6cdb-kubelet-dir\") pod \"ff1c11bd-6835-428a-818d-4856377b6cdb\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.126488 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff1c11bd-6835-428a-818d-4856377b6cdb-kube-api-access\") pod \"ff1c11bd-6835-428a-818d-4856377b6cdb\" (UID: \"ff1c11bd-6835-428a-818d-4856377b6cdb\") " Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.126710 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1c11bd-6835-428a-818d-4856377b6cdb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff1c11bd-6835-428a-818d-4856377b6cdb" (UID: "ff1c11bd-6835-428a-818d-4856377b6cdb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.127020 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff1c11bd-6835-428a-818d-4856377b6cdb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.137250 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1c11bd-6835-428a-818d-4856377b6cdb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff1c11bd-6835-428a-818d-4856377b6cdb" (UID: "ff1c11bd-6835-428a-818d-4856377b6cdb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.227925 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff1c11bd-6835-428a-818d-4856377b6cdb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.247529 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-5hq9p_ffa3574d-c847-4258-b8f3-7a044a52f07b/kube-multus-additional-cni-plugins/0.log" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.247715 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.328669 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffa3574d-c847-4258-b8f3-7a044a52f07b-cni-sysctl-allowlist\") pod \"ffa3574d-c847-4258-b8f3-7a044a52f07b\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.328776 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pqg9\" (UniqueName: \"kubernetes.io/projected/ffa3574d-c847-4258-b8f3-7a044a52f07b-kube-api-access-7pqg9\") pod \"ffa3574d-c847-4258-b8f3-7a044a52f07b\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.328911 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ffa3574d-c847-4258-b8f3-7a044a52f07b-ready\") pod \"ffa3574d-c847-4258-b8f3-7a044a52f07b\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.329203 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffa3574d-c847-4258-b8f3-7a044a52f07b-tuning-conf-dir\") pod \"ffa3574d-c847-4258-b8f3-7a044a52f07b\" (UID: \"ffa3574d-c847-4258-b8f3-7a044a52f07b\") " Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.329509 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffa3574d-c847-4258-b8f3-7a044a52f07b-ready" (OuterVolumeSpecName: "ready") pod "ffa3574d-c847-4258-b8f3-7a044a52f07b" (UID: "ffa3574d-c847-4258-b8f3-7a044a52f07b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.329536 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffa3574d-c847-4258-b8f3-7a044a52f07b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "ffa3574d-c847-4258-b8f3-7a044a52f07b" (UID: "ffa3574d-c847-4258-b8f3-7a044a52f07b"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.329693 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffa3574d-c847-4258-b8f3-7a044a52f07b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "ffa3574d-c847-4258-b8f3-7a044a52f07b" (UID: "ffa3574d-c847-4258-b8f3-7a044a52f07b"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.431436 5113 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ffa3574d-c847-4258-b8f3-7a044a52f07b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.431574 5113 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/ffa3574d-c847-4258-b8f3-7a044a52f07b-ready\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.431591 5113 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffa3574d-c847-4258-b8f3-7a044a52f07b-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.565149 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa3574d-c847-4258-b8f3-7a044a52f07b-kube-api-access-7pqg9" (OuterVolumeSpecName: "kube-api-access-7pqg9") pod "ffa3574d-c847-4258-b8f3-7a044a52f07b" (UID: "ffa3574d-c847-4258-b8f3-7a044a52f07b"). InnerVolumeSpecName "kube-api-access-7pqg9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.579869 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-5hq9p_ffa3574d-c847-4258-b8f3-7a044a52f07b/kube-multus-additional-cni-plugins/0.log" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.580093 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" event={"ID":"ffa3574d-c847-4258-b8f3-7a044a52f07b","Type":"ContainerDied","Data":"2c387b40bae698aaff13c1e452657f39a9600cfa7c0a3bb1854915bacbf0bae7"} Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.580303 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-5hq9p" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.583379 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"ff1c11bd-6835-428a-818d-4856377b6cdb","Type":"ContainerDied","Data":"d6b38522d1eb7e57bef0dd53e47a8b0decd111d68e9c0c127c0e5777fb37425a"} Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.583416 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.583429 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6b38522d1eb7e57bef0dd53e47a8b0decd111d68e9c0c127c0e5777fb37425a" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.619861 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5hq9p"] Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.622486 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-5hq9p"] Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.634535 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7pqg9\" (UniqueName: \"kubernetes.io/projected/ffa3574d-c847-4258-b8f3-7a044a52f07b-kube-api-access-7pqg9\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:38 crc kubenswrapper[5113]: I1208 17:43:38.689667 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" path="/var/lib/kubelet/pods/ffa3574d-c847-4258-b8f3-7a044a52f07b/volumes" Dec 08 17:43:39 crc kubenswrapper[5113]: I1208 17:43:39.103156 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:39 crc kubenswrapper[5113]: I1208 17:43:39.103248 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.842184 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.891474 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg"] Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893076 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ff1c11bd-6835-428a-818d-4856377b6cdb" containerName="pruner" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893098 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1c11bd-6835-428a-818d-4856377b6cdb" containerName="pruner" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893115 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893126 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893139 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893145 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893278 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ff1c11bd-6835-428a-818d-4856377b6cdb" containerName="pruner" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893289 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffa3574d-c847-4258-b8f3-7a044a52f07b" containerName="kube-multus-additional-cni-plugins" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.893300 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.896660 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4fb66f3-9cc5-4140-a02d-64337bf308a6-serving-cert\") pod \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.896726 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-config\") pod \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.896840 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-proxy-ca-bundles\") pod \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.896903 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k89nr\" (UniqueName: \"kubernetes.io/projected/f4fb66f3-9cc5-4140-a02d-64337bf308a6-kube-api-access-k89nr\") pod \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\" (UID: 
\"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.896993 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4fb66f3-9cc5-4140-a02d-64337bf308a6-tmp\") pod \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.897074 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-client-ca\") pod \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\" (UID: \"f4fb66f3-9cc5-4140-a02d-64337bf308a6\") " Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.897688 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4fb66f3-9cc5-4140-a02d-64337bf308a6-tmp" (OuterVolumeSpecName: "tmp") pod "f4fb66f3-9cc5-4140-a02d-64337bf308a6" (UID: "f4fb66f3-9cc5-4140-a02d-64337bf308a6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.898124 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4fb66f3-9cc5-4140-a02d-64337bf308a6" (UID: "f4fb66f3-9cc5-4140-a02d-64337bf308a6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.898217 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f4fb66f3-9cc5-4140-a02d-64337bf308a6" (UID: "f4fb66f3-9cc5-4140-a02d-64337bf308a6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.898780 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-config" (OuterVolumeSpecName: "config") pod "f4fb66f3-9cc5-4140-a02d-64337bf308a6" (UID: "f4fb66f3-9cc5-4140-a02d-64337bf308a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.899359 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.919372 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fb66f3-9cc5-4140-a02d-64337bf308a6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4fb66f3-9cc5-4140-a02d-64337bf308a6" (UID: "f4fb66f3-9cc5-4140-a02d-64337bf308a6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.926610 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fb66f3-9cc5-4140-a02d-64337bf308a6-kube-api-access-k89nr" (OuterVolumeSpecName: "kube-api-access-k89nr") pod "f4fb66f3-9cc5-4140-a02d-64337bf308a6" (UID: "f4fb66f3-9cc5-4140-a02d-64337bf308a6"). InnerVolumeSpecName "kube-api-access-k89nr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.934866 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg"] Dec 08 17:43:46 crc kubenswrapper[5113]: I1208 17:43:46.966250 5113 scope.go:117] "RemoveContainer" containerID="e683d2fa9449311f6e3b9727cb035ac6916d57c5b7987e152b9d757c0f444f6b" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.002731 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzgf\" (UniqueName: \"kubernetes.io/projected/7997ac5e-4332-4152-b046-9cb8e04a604c-kube-api-access-qzzgf\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.002777 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-config\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003025 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-proxy-ca-bundles\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003144 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-client-ca\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003165 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7997ac5e-4332-4152-b046-9cb8e04a604c-serving-cert\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003201 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7997ac5e-4332-4152-b046-9cb8e04a604c-tmp\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003317 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4fb66f3-9cc5-4140-a02d-64337bf308a6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003331 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-config\") on node \"crc\" DevicePath \"\"" Dec 08 
17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003342 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003357 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k89nr\" (UniqueName: \"kubernetes.io/projected/f4fb66f3-9cc5-4140-a02d-64337bf308a6-kube-api-access-k89nr\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003368 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f4fb66f3-9cc5-4140-a02d-64337bf308a6-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.003382 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4fb66f3-9cc5-4140-a02d-64337bf308a6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.010418 5113 patch_prober.go:28] interesting pod/controller-manager-d979478f4-zqjhw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.010503 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": context deadline exceeded" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.032982 5113 scope.go:117] "RemoveContainer" containerID="84835268dc35d60f0dd3000beefd4c60398c410f3162c8793e8f29d355cdc653" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.105140 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7997ac5e-4332-4152-b046-9cb8e04a604c-tmp\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.105444 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qzzgf\" (UniqueName: \"kubernetes.io/projected/7997ac5e-4332-4152-b046-9cb8e04a604c-kube-api-access-qzzgf\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.105468 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-config\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.105535 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-proxy-ca-bundles\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " 
pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.105789 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-client-ca\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.105872 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7997ac5e-4332-4152-b046-9cb8e04a604c-serving-cert\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.106932 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7997ac5e-4332-4152-b046-9cb8e04a604c-tmp\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.106974 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-config\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.107495 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-proxy-ca-bundles\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.107559 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-client-ca\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.127185 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7997ac5e-4332-4152-b046-9cb8e04a604c-serving-cert\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.166778 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzzgf\" (UniqueName: \"kubernetes.io/projected/7997ac5e-4332-4152-b046-9cb8e04a604c-kube-api-access-qzzgf\") pod \"controller-manager-65dcd8c5cb-h74xg\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.249346 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.272532 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl"] Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.279486 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.366998 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 17:43:47 crc kubenswrapper[5113]: W1208 17:43:47.393605 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2bff4f8b_254e_41ed_95ea_6c7a6d137cf8.slice/crio-64345dc1fac5d4a1040bc57f8436e82b4928db31287d673c8b0e7b463770a2b8 WatchSource:0}: Error finding container 64345dc1fac5d4a1040bc57f8436e82b4928db31287d673c8b0e7b463770a2b8: Status 404 returned error can't find the container with id 64345dc1fac5d4a1040bc57f8436e82b4928db31287d673c8b0e7b463770a2b8 Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.660734 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hssz4" event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerStarted","Data":"d6b7463f7755a28072023e7024c320212111d4546e8f2387070bf63b2a6f68f2"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.661272 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg"] Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.663875 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" event={"ID":"f4fb66f3-9cc5-4140-a02d-64337bf308a6","Type":"ContainerDied","Data":"abe143a0fe4235ac1a568ca3b13de6e1e60e5b0887d66b1d9f1662269db865fa"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.663957 5113 scope.go:117] "RemoveContainer" containerID="a831cbfb8745ddbce90f398757f712581816e26c45c93a6b778030fb4f8300d8" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.664213 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d979478f4-zqjhw" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.674050 5113 generic.go:358] "Generic (PLEG): container finished" podID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerID="46b3e48ab8cbed35b8b2c609523ef22f14ae184f100bd42aa3cd9034f7f5c259" exitCode=0 Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.674201 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qxws" event={"ID":"f9b516ac-7e4a-4d32-9f80-c8ec25504b22","Type":"ContainerDied","Data":"46b3e48ab8cbed35b8b2c609523ef22f14ae184f100bd42aa3cd9034f7f5c259"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.688768 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03","Type":"ContainerStarted","Data":"d8d22472a6f8b4b943f051e750d47b042fd9ccf4a894935b9654d2b49627b2bd"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.698409 5113 generic.go:358] "Generic (PLEG): container finished" podID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerID="25c1df993569d361b2d5dcbf6d05f876a0f69d30301c681df260e4b27c1dbfec" exitCode=0 Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.698524 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m6gs" event={"ID":"6c70de64-72e0-4f9a-a819-2c1a683e43b7","Type":"ContainerDied","Data":"25c1df993569d361b2d5dcbf6d05f876a0f69d30301c681df260e4b27c1dbfec"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.713925 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerStarted","Data":"6b54997d97a14623a25dc5ef9cf8e654f99329e7742b4081fd1627ea8feba5f9"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.715598 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8","Type":"ContainerStarted","Data":"64345dc1fac5d4a1040bc57f8436e82b4928db31287d673c8b0e7b463770a2b8"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.726066 5113 generic.go:358] "Generic (PLEG): container finished" podID="f838eabb-c868-4308-ab80-860767b7bf4a" containerID="a75330171946fb71accab334bfcdcbdf4fc67fb7c893601cb5b739a8a7ec8d06" exitCode=0 Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.726149 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x2ww" event={"ID":"f838eabb-c868-4308-ab80-860767b7bf4a","Type":"ContainerDied","Data":"a75330171946fb71accab334bfcdcbdf4fc67fb7c893601cb5b739a8a7ec8d06"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.734875 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" event={"ID":"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5","Type":"ContainerStarted","Data":"172cb0f8414d394d8215ef8473abcbe069b0e6e095700550745901e07f475139"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.762583 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerStarted","Data":"f5edf34990f8049a6822c2fc7da73bc5dcd50f1f59b1e859494c81893b61d690"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.775434 5113 generic.go:358] 
"Generic (PLEG): container finished" podID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerID="2b268d5a6d6e12cf9eb4b9a23cf15033b6405ad80e1d32c1ca99aec0920eb334" exitCode=0 Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.775521 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wng52" event={"ID":"41d40883-8b52-49f7-b408-0d99251bf9f2","Type":"ContainerDied","Data":"2b268d5a6d6e12cf9eb4b9a23cf15033b6405ad80e1d32c1ca99aec0920eb334"} Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.785398 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.787444 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.787537 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.922683 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d979478f4-zqjhw"] Dec 08 17:43:47 crc kubenswrapper[5113]: I1208 17:43:47.926102 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d979478f4-zqjhw"] Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.700836 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fb66f3-9cc5-4140-a02d-64337bf308a6" path="/var/lib/kubelet/pods/f4fb66f3-9cc5-4140-a02d-64337bf308a6/volumes" Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.812778 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" event={"ID":"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5","Type":"ContainerStarted","Data":"f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd"} Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.813891 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.815164 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" event={"ID":"7997ac5e-4332-4152-b046-9cb8e04a604c","Type":"ContainerStarted","Data":"c06555231024965d020965876fb7273e212f17504d6a3da083557e508766604d"} Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.819140 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerStarted","Data":"a0035579160b1c007b9a20537bf681e17ec26c3c1dee2793168456012776ae75"} Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.864508 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" podStartSLOduration=33.864478006 podStartE2EDuration="33.864478006s" 
podCreationTimestamp="2025-12-08 17:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:48.83651752 +0000 UTC m=+194.552310646" watchObservedRunningTime="2025-12-08 17:43:48.864478006 +0000 UTC m=+194.580271122" Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.874174 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerStarted","Data":"d57f2852911c602315c810e821645cdbd68ed55004b4861bea4d932485a4be1d"} Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.879445 5113 generic.go:358] "Generic (PLEG): container finished" podID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerID="d6b7463f7755a28072023e7024c320212111d4546e8f2387070bf63b2a6f68f2" exitCode=0 Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.879583 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hssz4" event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerDied","Data":"d6b7463f7755a28072023e7024c320212111d4546e8f2387070bf63b2a6f68f2"} Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.903297 5113 generic.go:358] "Generic (PLEG): container finished" podID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerID="6b54997d97a14623a25dc5ef9cf8e654f99329e7742b4081fd1627ea8feba5f9" exitCode=0 Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.904555 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerDied","Data":"6b54997d97a14623a25dc5ef9cf8e654f99329e7742b4081fd1627ea8feba5f9"} Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.905094 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.905158 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:48 crc kubenswrapper[5113]: I1208 17:43:48.972518 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:43:49 crc kubenswrapper[5113]: I1208 17:43:49.027890 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:49 crc kubenswrapper[5113]: I1208 17:43:49.028325 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.250440 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-hssz4" event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerStarted","Data":"e042dddb9b60188e7c02d8d0016ae7cd7e6946cb020e0eef4c3f77764ea062ba"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.255991 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qxws" event={"ID":"f9b516ac-7e4a-4d32-9f80-c8ec25504b22","Type":"ContainerStarted","Data":"998014fe2e42df6bc40d63c5caacf39f51bfd9702b0f95cb22ec183ceab4662b"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.270123 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03","Type":"ContainerStarted","Data":"4af6f440f09d495131b1704521acf4ab15434ef5bb620e3054d237d3c3b9e989"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.274479 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m6gs" event={"ID":"6c70de64-72e0-4f9a-a819-2c1a683e43b7","Type":"ContainerStarted","Data":"8e16d37bfdeb160b414d02568aa162e87665e3ad3bba0d1778d82066e2f9d56c"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.277638 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerStarted","Data":"7fd40df2a3318b992a023fff47fb9d008eb556d42ed9e85d1acf3638770ef810"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.279201 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hssz4" podStartSLOduration=13.14061306 podStartE2EDuration="58.279176819s" podCreationTimestamp="2025-12-08 17:42:52 +0000 UTC" firstStartedPulling="2025-12-08 17:43:01.575996206 +0000 UTC m=+147.291789322" lastFinishedPulling="2025-12-08 17:43:46.714559965 +0000 UTC m=+192.430353081" observedRunningTime="2025-12-08 17:43:50.275733771 +0000 UTC m=+195.991526887" watchObservedRunningTime="2025-12-08 17:43:50.279176819 +0000 UTC m=+195.994969935" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.281883 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8","Type":"ContainerStarted","Data":"3c0df65458070c6fbf1f92480182eafc40e6d8ff770661e2590dd6f1f565d7c3"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.285158 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x2ww" event={"ID":"f838eabb-c868-4308-ab80-860767b7bf4a","Type":"ContainerStarted","Data":"88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.295645 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" event={"ID":"7997ac5e-4332-4152-b046-9cb8e04a604c","Type":"ContainerStarted","Data":"2b91a03de6f1e06d48583a4b3bbea71b60e2e90f46ef38daa1e1b39ed79e6c30"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.295976 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.301535 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wng52" 
event={"ID":"41d40883-8b52-49f7-b408-0d99251bf9f2","Type":"ContainerStarted","Data":"4b5da41254daf255ea0b0f9c437fba7cfe48f5cc1f837b98d911da68572da09e"} Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.302342 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.302412 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.316627 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5qxws" podStartSLOduration=12.383672239 podStartE2EDuration="56.316605467s" podCreationTimestamp="2025-12-08 17:42:54 +0000 UTC" firstStartedPulling="2025-12-08 17:43:02.790545695 +0000 UTC m=+148.506338811" lastFinishedPulling="2025-12-08 17:43:46.723478923 +0000 UTC m=+192.439272039" observedRunningTime="2025-12-08 17:43:50.313327193 +0000 UTC m=+196.029120309" watchObservedRunningTime="2025-12-08 17:43:50.316605467 +0000 UTC m=+196.032398583" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.416740 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m6gs" podStartSLOduration=9.814961396 podStartE2EDuration="56.416719021s" podCreationTimestamp="2025-12-08 17:42:54 +0000 UTC" firstStartedPulling="2025-12-08 17:43:00.430183107 +0000 UTC m=+146.145976223" lastFinishedPulling="2025-12-08 17:43:47.031940732 +0000 UTC m=+192.747733848" observedRunningTime="2025-12-08 17:43:50.413902848 +0000 UTC m=+196.129695974" watchObservedRunningTime="2025-12-08 17:43:50.416719021 +0000 UTC m=+196.132512137" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.435597 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=23.435579893 podStartE2EDuration="23.435579893s" podCreationTimestamp="2025-12-08 17:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:50.435507812 +0000 UTC m=+196.151300928" watchObservedRunningTime="2025-12-08 17:43:50.435579893 +0000 UTC m=+196.151373009" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.621892 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=18.621875943 podStartE2EDuration="18.621875943s" podCreationTimestamp="2025-12-08 17:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:50.620467167 +0000 UTC m=+196.336260283" watchObservedRunningTime="2025-12-08 17:43:50.621875943 +0000 UTC m=+196.337669059" Dec 08 17:43:50 crc kubenswrapper[5113]: I1208 17:43:50.853886 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6x2ww" podStartSLOduration=10.521444145 podStartE2EDuration="58.853865404s" 
podCreationTimestamp="2025-12-08 17:42:52 +0000 UTC" firstStartedPulling="2025-12-08 17:42:58.382140436 +0000 UTC m=+144.097933552" lastFinishedPulling="2025-12-08 17:43:46.714561695 +0000 UTC m=+192.430354811" observedRunningTime="2025-12-08 17:43:50.851191095 +0000 UTC m=+196.566984201" watchObservedRunningTime="2025-12-08 17:43:50.853865404 +0000 UTC m=+196.569658530" Dec 08 17:43:51 crc kubenswrapper[5113]: I1208 17:43:51.170669 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" podStartSLOduration=36.170637465 podStartE2EDuration="36.170637465s" podCreationTimestamp="2025-12-08 17:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:43:51.144985528 +0000 UTC m=+196.860778664" watchObservedRunningTime="2025-12-08 17:43:51.170637465 +0000 UTC m=+196.886430581" Dec 08 17:43:51 crc kubenswrapper[5113]: I1208 17:43:51.183412 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5vsp" podStartSLOduration=13.721038631999999 podStartE2EDuration="59.183395962s" podCreationTimestamp="2025-12-08 17:42:52 +0000 UTC" firstStartedPulling="2025-12-08 17:43:01.576817367 +0000 UTC m=+147.292610483" lastFinishedPulling="2025-12-08 17:43:47.039174697 +0000 UTC m=+192.754967813" observedRunningTime="2025-12-08 17:43:51.177117561 +0000 UTC m=+196.892910697" watchObservedRunningTime="2025-12-08 17:43:51.183395962 +0000 UTC m=+196.899189078" Dec 08 17:43:51 crc kubenswrapper[5113]: I1208 17:43:51.296684 5113 patch_prober.go:28] interesting pod/controller-manager-65dcd8c5cb-h74xg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 17:43:51 crc kubenswrapper[5113]: I1208 17:43:51.296820 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" podUID="7997ac5e-4332-4152-b046-9cb8e04a604c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 17:43:51 crc kubenswrapper[5113]: I1208 17:43:51.335475 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:43:51 crc kubenswrapper[5113]: I1208 17:43:51.366341 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wng52" podStartSLOduration=13.952366526 podStartE2EDuration="59.366320386s" podCreationTimestamp="2025-12-08 17:42:52 +0000 UTC" firstStartedPulling="2025-12-08 17:43:01.578115861 +0000 UTC m=+147.293908977" lastFinishedPulling="2025-12-08 17:43:46.992069721 +0000 UTC m=+192.707862837" observedRunningTime="2025-12-08 17:43:51.222142914 +0000 UTC m=+196.937936050" watchObservedRunningTime="2025-12-08 17:43:51.366320386 +0000 UTC m=+197.082113502" Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.141734 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 
17:43:53.143259 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-6x2ww"
Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.542852 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hssz4"
Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.542917 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-hssz4"
Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.658405 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-q5vsp"
Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.658490 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5vsp"
Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.849281 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wng52"
Dec 08 17:43:53 crc kubenswrapper[5113]: I1208 17:43:53.849706 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wng52"
Dec 08 17:43:55 crc kubenswrapper[5113]: I1208 17:43:55.257624 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:43:55 crc kubenswrapper[5113]: I1208 17:43:55.258633 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-7m6gs"
Dec 08 17:43:55 crc kubenswrapper[5113]: I1208 17:43:55.564047 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:43:55 crc kubenswrapper[5113]: I1208 17:43:55.564121 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.547441 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6x2ww"
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.547603 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.666853 5113 generic.go:358] "Generic (PLEG): container finished" podID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerID="d57f2852911c602315c810e821645cdbd68ed55004b4861bea4d932485a4be1d" exitCode=0
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.666966 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerDied","Data":"d57f2852911c602315c810e821645cdbd68ed55004b4861bea4d932485a4be1d"}
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.672916 5113 generic.go:358] "Generic (PLEG): container finished" podID="a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03" containerID="4af6f440f09d495131b1704521acf4ab15434ef5bb620e3054d237d3c3b9e989" exitCode=0
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.673091 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03","Type":"ContainerDied","Data":"4af6f440f09d495131b1704521acf4ab15434ef5bb620e3054d237d3c3b9e989"}
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.689205 5113 generic.go:358] "Generic (PLEG): container finished" podID="8be217c9-d60b-4e20-9733-d8011aa40811" containerID="a0035579160b1c007b9a20537bf681e17ec26c3c1dee2793168456012776ae75" exitCode=0
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.689323 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerDied","Data":"a0035579160b1c007b9a20537bf681e17ec26c3c1dee2793168456012776ae75"}
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.719630 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5qxws"
Dec 08 17:43:56 crc kubenswrapper[5113]: I1208 17:43:56.723901 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6x2ww"
Dec 08 17:43:57 crc kubenswrapper[5113]: I1208 17:43:57.422291 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7m6gs" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:43:57 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:43:57 crc kubenswrapper[5113]: >
Dec 08 17:43:57 crc kubenswrapper[5113]: I1208 17:43:57.438440 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wng52" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:43:57 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:43:57 crc kubenswrapper[5113]: >
Dec 08 17:43:57 crc kubenswrapper[5113]: I1208 17:43:57.439686 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-q5vsp" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:43:57 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:43:57 crc kubenswrapper[5113]: >
Dec 08 17:43:57 crc kubenswrapper[5113]: I1208 17:43:57.513962 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hssz4" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:43:57 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:43:57 crc kubenswrapper[5113]: >
Dec 08 17:43:57 crc kubenswrapper[5113]: I1208 17:43:57.734890 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qxws"]
Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.192787 5113 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.278660 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kube-api-access\") pod \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.278871 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kubelet-dir\") pod \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\" (UID: \"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03\") " Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.279072 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03" (UID: "a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.353363 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03" (UID: "a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.380610 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.380663 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.723960 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerStarted","Data":"0e2d24c1911873a864450a40e7b78f6b036dd1403cfbc6f15956aa927d915bd2"} Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.734634 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerStarted","Data":"9e771ad52c5f8303117ceb9fdc6af79fd09f71706a3d73d486aaa6dd4acaa6c2"} Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.738084 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5qxws" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="registry-server" containerID="cri-o://998014fe2e42df6bc40d63c5caacf39f51bfd9702b0f95cb22ec183ceab4662b" gracePeriod=2 Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.738424 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.738940 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03","Type":"ContainerDied","Data":"d8d22472a6f8b4b943f051e750d47b042fd9ccf4a894935b9654d2b49627b2bd"} Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.738966 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d22472a6f8b4b943f051e750d47b042fd9ccf4a894935b9654d2b49627b2bd" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.757843 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d2k64" podStartSLOduration=19.391096794 podStartE2EDuration="1m3.75782251s" podCreationTimestamp="2025-12-08 17:42:55 +0000 UTC" firstStartedPulling="2025-12-08 17:43:02.731335739 +0000 UTC m=+148.447128855" lastFinishedPulling="2025-12-08 17:43:47.098061455 +0000 UTC m=+192.813854571" observedRunningTime="2025-12-08 17:43:58.757507511 +0000 UTC m=+204.473300637" watchObservedRunningTime="2025-12-08 17:43:58.75782251 +0000 UTC m=+204.473615626" Dec 08 17:43:58 crc kubenswrapper[5113]: I1208 17:43:58.791140 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdsj7" podStartSLOduration=19.468194428 podStartE2EDuration="1m3.791115092s" podCreationTimestamp="2025-12-08 17:42:55 +0000 UTC" firstStartedPulling="2025-12-08 17:43:02.743937082 +0000 UTC m=+148.459730198" lastFinishedPulling="2025-12-08 17:43:47.066857746 +0000 UTC m=+192.782650862" observedRunningTime="2025-12-08 17:43:58.78947698 +0000 UTC m=+204.505270096" watchObservedRunningTime="2025-12-08 17:43:58.791115092 +0000 UTC m=+204.506908208" Dec 08 17:43:59 crc kubenswrapper[5113]: I1208 17:43:59.028276 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:43:59 crc kubenswrapper[5113]: I1208 17:43:59.028360 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:00 crc kubenswrapper[5113]: I1208 17:44:00.303166 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:44:00 crc kubenswrapper[5113]: I1208 17:44:00.303557 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:00 crc kubenswrapper[5113]: I1208 17:44:00.594566 5113 ???:1] "http: TLS handshake error from 192.168.126.11:42488: no serving certificate available for the kubelet" Dec 08 17:44:00 crc kubenswrapper[5113]: I1208 17:44:00.753510 5113 generic.go:358] "Generic (PLEG): 
container finished" podID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerID="998014fe2e42df6bc40d63c5caacf39f51bfd9702b0f95cb22ec183ceab4662b" exitCode=0 Dec 08 17:44:00 crc kubenswrapper[5113]: I1208 17:44:00.753562 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qxws" event={"ID":"f9b516ac-7e4a-4d32-9f80-c8ec25504b22","Type":"ContainerDied","Data":"998014fe2e42df6bc40d63c5caacf39f51bfd9702b0f95cb22ec183ceab4662b"} Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.515766 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.560632 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.565732 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-catalog-content\") pod \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.565878 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-utilities\") pod \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.565985 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcxvf\" (UniqueName: \"kubernetes.io/projected/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-kube-api-access-rcxvf\") pod \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\" (UID: \"f9b516ac-7e4a-4d32-9f80-c8ec25504b22\") " Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.567841 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-utilities" (OuterVolumeSpecName: "utilities") pod "f9b516ac-7e4a-4d32-9f80-c8ec25504b22" (UID: "f9b516ac-7e4a-4d32-9f80-c8ec25504b22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.577334 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-kube-api-access-rcxvf" (OuterVolumeSpecName: "kube-api-access-rcxvf") pod "f9b516ac-7e4a-4d32-9f80-c8ec25504b22" (UID: "f9b516ac-7e4a-4d32-9f80-c8ec25504b22"). InnerVolumeSpecName "kube-api-access-rcxvf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.589244 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9b516ac-7e4a-4d32-9f80-c8ec25504b22" (UID: "f9b516ac-7e4a-4d32-9f80-c8ec25504b22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.620554 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.667891 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rcxvf\" (UniqueName: \"kubernetes.io/projected/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-kube-api-access-rcxvf\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.667934 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.667950 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b516ac-7e4a-4d32-9f80-c8ec25504b22-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.719317 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.759004 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.773413 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5qxws" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.774327 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5qxws" event={"ID":"f9b516ac-7e4a-4d32-9f80-c8ec25504b22","Type":"ContainerDied","Data":"f15f7f0da3a73d59535971aa3f0ec7a285502b15dd71c6108fcc5ec67a211d33"} Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.774403 5113 scope.go:117] "RemoveContainer" containerID="998014fe2e42df6bc40d63c5caacf39f51bfd9702b0f95cb22ec183ceab4662b" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.807008 5113 scope.go:117] "RemoveContainer" containerID="46b3e48ab8cbed35b8b2c609523ef22f14ae184f100bd42aa3cd9034f7f5c259" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.812376 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qxws"] Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.818499 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5qxws"] Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.835744 5113 scope.go:117] "RemoveContainer" containerID="cb0929f500d74ebc0e9500e91f2225076bd34ef43d387b00e2525c20bd8f02e0" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.889314 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:44:03 crc kubenswrapper[5113]: I1208 17:44:03.935306 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.029528 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" path="/var/lib/kubelet/pods/f9b516ac-7e4a-4d32-9f80-c8ec25504b22/volumes" Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.352204 5113 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m6gs" Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.426559 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m6gs" Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.620114 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hssz4"] Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.620554 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hssz4" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="registry-server" containerID="cri-o://e042dddb9b60188e7c02d8d0016ae7cd7e6946cb020e0eef4c3f77764ea062ba" gracePeriod=2 Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.799229 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2k64" Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.799302 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-d2k64" Dec 08 17:44:05 crc kubenswrapper[5113]: I1208 17:44:05.870329 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2k64" Dec 08 17:44:06 crc kubenswrapper[5113]: I1208 17:44:06.081354 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2k64" Dec 08 17:44:06 crc kubenswrapper[5113]: I1208 17:44:06.688417 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:44:06 crc kubenswrapper[5113]: I1208 17:44:06.688482 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:44:06 crc kubenswrapper[5113]: I1208 17:44:06.781288 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:44:06 crc kubenswrapper[5113]: I1208 17:44:06.860806 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-dvf7w"] Dec 08 17:44:07 crc kubenswrapper[5113]: I1208 17:44:07.102003 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:44:08 crc kubenswrapper[5113]: I1208 17:44:08.052762 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wng52"] Dec 08 17:44:08 crc kubenswrapper[5113]: I1208 17:44:08.053862 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wng52" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="registry-server" containerID="cri-o://4b5da41254daf255ea0b0f9c437fba7cfe48f5cc1f837b98d911da68572da09e" gracePeriod=2 Dec 08 17:44:08 crc kubenswrapper[5113]: I1208 17:44:08.063024 5113 generic.go:358] "Generic (PLEG): container finished" podID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerID="e042dddb9b60188e7c02d8d0016ae7cd7e6946cb020e0eef4c3f77764ea062ba" exitCode=0 Dec 08 17:44:08 crc kubenswrapper[5113]: I1208 17:44:08.063102 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hssz4" 
event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerDied","Data":"e042dddb9b60188e7c02d8d0016ae7cd7e6946cb020e0eef4c3f77764ea062ba"} Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.028476 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.029094 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.029163 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.029920 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f5edf34990f8049a6822c2fc7da73bc5dcd50f1f59b1e859494c81893b61d690"} pod="openshift-console/downloads-747b44746d-klln7" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.029968 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" containerID="cri-o://f5edf34990f8049a6822c2fc7da73bc5dcd50f1f59b1e859494c81893b61d690" gracePeriod=2 Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.030293 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.030437 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.353263 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.396612 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-catalog-content\") pod \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.396783 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-utilities\") pod \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.396847 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-762kl\" (UniqueName: \"kubernetes.io/projected/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-kube-api-access-762kl\") pod \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\" (UID: \"344cbec9-b5c3-4662-96d9-d7a1eac85bb7\") " Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.398121 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-utilities" (OuterVolumeSpecName: "utilities") pod "344cbec9-b5c3-4662-96d9-d7a1eac85bb7" (UID: "344cbec9-b5c3-4662-96d9-d7a1eac85bb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.404155 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-kube-api-access-762kl" (OuterVolumeSpecName: "kube-api-access-762kl") pod "344cbec9-b5c3-4662-96d9-d7a1eac85bb7" (UID: "344cbec9-b5c3-4662-96d9-d7a1eac85bb7"). InnerVolumeSpecName "kube-api-access-762kl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.499661 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.499695 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-762kl\" (UniqueName: \"kubernetes.io/projected/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-kube-api-access-762kl\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.521728 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "344cbec9-b5c3-4662-96d9-d7a1eac85bb7" (UID: "344cbec9-b5c3-4662-96d9-d7a1eac85bb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:09 crc kubenswrapper[5113]: I1208 17:44:09.601480 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/344cbec9-b5c3-4662-96d9-d7a1eac85bb7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.082744 5113 generic.go:358] "Generic (PLEG): container finished" podID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerID="f5edf34990f8049a6822c2fc7da73bc5dcd50f1f59b1e859494c81893b61d690" exitCode=0 Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.082959 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerDied","Data":"f5edf34990f8049a6822c2fc7da73bc5dcd50f1f59b1e859494c81893b61d690"} Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.083027 5113 scope.go:117] "RemoveContainer" containerID="4e224fc2a91768c03e8c25bc87fd1989a0e510734bda2845b8c8af8b1e3234b6" Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.087098 5113 generic.go:358] "Generic (PLEG): container finished" podID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerID="4b5da41254daf255ea0b0f9c437fba7cfe48f5cc1f837b98d911da68572da09e" exitCode=0 Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.087183 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wng52" event={"ID":"41d40883-8b52-49f7-b408-0d99251bf9f2","Type":"ContainerDied","Data":"4b5da41254daf255ea0b0f9c437fba7cfe48f5cc1f837b98d911da68572da09e"} Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.090205 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hssz4" event={"ID":"344cbec9-b5c3-4662-96d9-d7a1eac85bb7","Type":"ContainerDied","Data":"38d4839b703d5d102887f5768badfb7da708aa2d4d8a935c886af25c3310d62d"} Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.090356 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hssz4" Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.134190 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hssz4"] Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.140102 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hssz4"] Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.549727 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdsj7"] Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.550072 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wdsj7" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="registry-server" containerID="cri-o://9e771ad52c5f8303117ceb9fdc6af79fd09f71706a3d73d486aaa6dd4acaa6c2" gracePeriod=2 Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.693128 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" path="/var/lib/kubelet/pods/344cbec9-b5c3-4662-96d9-d7a1eac85bb7/volumes" Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.846754 5113 scope.go:117] "RemoveContainer" containerID="e042dddb9b60188e7c02d8d0016ae7cd7e6946cb020e0eef4c3f77764ea062ba" Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.866585 5113 scope.go:117] "RemoveContainer" containerID="d6b7463f7755a28072023e7024c320212111d4546e8f2387070bf63b2a6f68f2" Dec 08 17:44:10 crc kubenswrapper[5113]: I1208 17:44:10.895011 5113 scope.go:117] "RemoveContainer" containerID="acce3cac6dc9108e2fff9cd61451999e485ef970ad5808734b8a513b46b0c9b6" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.200547 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.225373 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvfwh\" (UniqueName: \"kubernetes.io/projected/41d40883-8b52-49f7-b408-0d99251bf9f2-kube-api-access-jvfwh\") pod \"41d40883-8b52-49f7-b408-0d99251bf9f2\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.225428 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-catalog-content\") pod \"41d40883-8b52-49f7-b408-0d99251bf9f2\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.225470 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-utilities\") pod \"41d40883-8b52-49f7-b408-0d99251bf9f2\" (UID: \"41d40883-8b52-49f7-b408-0d99251bf9f2\") " Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.226960 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-utilities" (OuterVolumeSpecName: "utilities") pod "41d40883-8b52-49f7-b408-0d99251bf9f2" (UID: "41d40883-8b52-49f7-b408-0d99251bf9f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.233599 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d40883-8b52-49f7-b408-0d99251bf9f2-kube-api-access-jvfwh" (OuterVolumeSpecName: "kube-api-access-jvfwh") pod "41d40883-8b52-49f7-b408-0d99251bf9f2" (UID: "41d40883-8b52-49f7-b408-0d99251bf9f2"). InnerVolumeSpecName "kube-api-access-jvfwh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.253221 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41d40883-8b52-49f7-b408-0d99251bf9f2" (UID: "41d40883-8b52-49f7-b408-0d99251bf9f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.326638 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jvfwh\" (UniqueName: \"kubernetes.io/projected/41d40883-8b52-49f7-b408-0d99251bf9f2-kube-api-access-jvfwh\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.326847 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:11 crc kubenswrapper[5113]: I1208 17:44:11.326861 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41d40883-8b52-49f7-b408-0d99251bf9f2-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.111150 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wng52" event={"ID":"41d40883-8b52-49f7-b408-0d99251bf9f2","Type":"ContainerDied","Data":"800574a5e489b4723401c3c67ad30d6d3d98fb531fbe899bfb9a39463a1cf260"} Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.111206 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wng52" Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.111221 5113 scope.go:117] "RemoveContainer" containerID="4b5da41254daf255ea0b0f9c437fba7cfe48f5cc1f837b98d911da68572da09e" Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.133499 5113 scope.go:117] "RemoveContainer" containerID="2b268d5a6d6e12cf9eb4b9a23cf15033b6405ad80e1d32c1ca99aec0920eb334" Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.156152 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wng52"] Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.156250 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wng52"] Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.184266 5113 scope.go:117] "RemoveContainer" containerID="6209f18517b12bef0df85d54b55f95739b52a54689c534fa560b9824ab5cdf57" Dec 08 17:44:12 crc kubenswrapper[5113]: I1208 17:44:12.928735 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" path="/var/lib/kubelet/pods/41d40883-8b52-49f7-b408-0d99251bf9f2/volumes" Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.135282 5113 generic.go:358] "Generic (PLEG): container finished" podID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerID="9e771ad52c5f8303117ceb9fdc6af79fd09f71706a3d73d486aaa6dd4acaa6c2" exitCode=0 Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.135939 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerDied","Data":"9e771ad52c5f8303117ceb9fdc6af79fd09f71706a3d73d486aaa6dd4acaa6c2"} Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.855402 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.995712 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-catalog-content\") pod \"69369de4-4a5e-4f6c-bda2-0ce227331647\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.996024 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk4xl\" (UniqueName: \"kubernetes.io/projected/69369de4-4a5e-4f6c-bda2-0ce227331647-kube-api-access-kk4xl\") pod \"69369de4-4a5e-4f6c-bda2-0ce227331647\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.996104 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-utilities\") pod \"69369de4-4a5e-4f6c-bda2-0ce227331647\" (UID: \"69369de4-4a5e-4f6c-bda2-0ce227331647\") " Dec 08 17:44:14 crc kubenswrapper[5113]: I1208 17:44:14.997556 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-utilities" (OuterVolumeSpecName: "utilities") pod "69369de4-4a5e-4f6c-bda2-0ce227331647" (UID: "69369de4-4a5e-4f6c-bda2-0ce227331647"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.009213 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69369de4-4a5e-4f6c-bda2-0ce227331647-kube-api-access-kk4xl" (OuterVolumeSpecName: "kube-api-access-kk4xl") pod "69369de4-4a5e-4f6c-bda2-0ce227331647" (UID: "69369de4-4a5e-4f6c-bda2-0ce227331647"). InnerVolumeSpecName "kube-api-access-kk4xl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.095938 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69369de4-4a5e-4f6c-bda2-0ce227331647" (UID: "69369de4-4a5e-4f6c-bda2-0ce227331647"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.098485 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kk4xl\" (UniqueName: \"kubernetes.io/projected/69369de4-4a5e-4f6c-bda2-0ce227331647-kube-api-access-kk4xl\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.098577 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.098620 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69369de4-4a5e-4f6c-bda2-0ce227331647-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.146707 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-klln7" event={"ID":"e5062982-84d6-4c80-8dce-4ab0e3098e96","Type":"ContainerStarted","Data":"f9b728229a048fff3d164d035a8e06de2f296981e698598f933a30370d3dd5bd"} Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.147194 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.147268 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.147309 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.149339 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdsj7" event={"ID":"69369de4-4a5e-4f6c-bda2-0ce227331647","Type":"ContainerDied","Data":"9934f6e2cf83209c2cafdc0057d2527a76af4d884982fb6f616b137ac38f92a3"} Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.149485 5113 scope.go:117] "RemoveContainer" containerID="9e771ad52c5f8303117ceb9fdc6af79fd09f71706a3d73d486aaa6dd4acaa6c2" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.149757 5113 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdsj7" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.179618 5113 scope.go:117] "RemoveContainer" containerID="d57f2852911c602315c810e821645cdbd68ed55004b4861bea4d932485a4be1d" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.194458 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdsj7"] Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.197720 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wdsj7"] Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.208930 5113 scope.go:117] "RemoveContainer" containerID="93fba2c822fd5152c77eebc6ea5438b74e646cfe4d4c2a1aecc4fb39e90f8502" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.462840 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg"] Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.463190 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" podUID="7997ac5e-4332-4152-b046-9cb8e04a604c" containerName="controller-manager" containerID="cri-o://2b91a03de6f1e06d48583a4b3bbea71b60e2e90f46ef38daa1e1b39ed79e6c30" gracePeriod=30 Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.511975 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl"] Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.512284 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" podUID="8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" containerName="route-controller-manager" containerID="cri-o://f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd" gracePeriod=30 Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.954483 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.987650 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8"] Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988303 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988321 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988337 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988351 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988460 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988471 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988483 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988490 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988500 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988507 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988517 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988523 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988533 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988540 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988549 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988556 5113 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988573 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988580 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="extract-utilities" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988589 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988596 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988603 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03" containerName="pruner" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988609 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03" containerName="pruner" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988621 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988626 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988633 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" containerName="route-controller-manager" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988640 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" containerName="route-controller-manager" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988649 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988655 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" containerName="extract-content" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988757 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988771 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" containerName="route-controller-manager" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988787 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="41d40883-8b52-49f7-b408-0d99251bf9f2" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988799 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="a2cc7e6f-3aaa-4ff7-a4f3-fc34bce53e03" containerName="pruner" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988809 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f9b516ac-7e4a-4d32-9f80-c8ec25504b22" 
containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.988820 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="344cbec9-b5c3-4662-96d9-d7a1eac85bb7" containerName="registry-server" Dec 08 17:44:15 crc kubenswrapper[5113]: I1208 17:44:15.996584 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.013588 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8"] Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.121483 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-serving-cert\") pod \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.121571 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-config\") pod \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.121603 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-tmp\") pod \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.121651 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bd5t\" (UniqueName: \"kubernetes.io/projected/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-kube-api-access-2bd5t\") pod \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.121696 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-client-ca\") pod \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\" (UID: \"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.121960 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-client-ca\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.122081 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1038345-8146-4b2e-8678-368bd2c98c99-tmp\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.122114 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-config\") pod 
\"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.122139 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hhl\" (UniqueName: \"kubernetes.io/projected/f1038345-8146-4b2e-8678-368bd2c98c99-kube-api-access-72hhl\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.122178 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1038345-8146-4b2e-8678-368bd2c98c99-serving-cert\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.122494 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-tmp" (OuterVolumeSpecName: "tmp") pod "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" (UID: "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.123152 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-config" (OuterVolumeSpecName: "config") pod "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" (UID: "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.123172 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" (UID: "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.130875 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" (UID: "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.136280 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-kube-api-access-2bd5t" (OuterVolumeSpecName: "kube-api-access-2bd5t") pod "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" (UID: "8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5"). InnerVolumeSpecName "kube-api-access-2bd5t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.158763 5113 generic.go:358] "Generic (PLEG): container finished" podID="8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" containerID="f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd" exitCode=0 Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.158836 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" event={"ID":"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5","Type":"ContainerDied","Data":"f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd"} Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.158925 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" event={"ID":"8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5","Type":"ContainerDied","Data":"172cb0f8414d394d8215ef8473abcbe069b0e6e095700550745901e07f475139"} Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.158934 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.158956 5113 scope.go:117] "RemoveContainer" containerID="f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.167728 5113 generic.go:358] "Generic (PLEG): container finished" podID="7997ac5e-4332-4152-b046-9cb8e04a604c" containerID="2b91a03de6f1e06d48583a4b3bbea71b60e2e90f46ef38daa1e1b39ed79e6c30" exitCode=0 Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.167962 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" event={"ID":"7997ac5e-4332-4152-b046-9cb8e04a604c","Type":"ContainerDied","Data":"2b91a03de6f1e06d48583a4b3bbea71b60e2e90f46ef38daa1e1b39ed79e6c30"} Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.180181 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.180306 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.193171 5113 scope.go:117] "RemoveContainer" containerID="f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd" Dec 08 17:44:16 crc kubenswrapper[5113]: E1208 17:44:16.193768 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd\": container with ID starting with f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd not found: ID does not exist" containerID="f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.193898 5113 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd"} err="failed to get container status \"f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd\": rpc error: code = NotFound desc = could not find container \"f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd\": container with ID starting with f8e215f405b37c1230ae74f4aa4d08ebfde30bd5d12263bb291bdc16c64403cd not found: ID does not exist" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.204005 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl"] Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.208248 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544d4d5ff5-bshnl"] Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.223410 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-client-ca\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.223795 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1038345-8146-4b2e-8678-368bd2c98c99-tmp\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.223918 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-config\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.224025 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-72hhl\" (UniqueName: \"kubernetes.io/projected/f1038345-8146-4b2e-8678-368bd2c98c99-kube-api-access-72hhl\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.224170 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1038345-8146-4b2e-8678-368bd2c98c99-serving-cert\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.224330 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.224393 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 
crc kubenswrapper[5113]: I1208 17:44:16.224479 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.224574 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2bd5t\" (UniqueName: \"kubernetes.io/projected/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-kube-api-access-2bd5t\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.224663 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.225196 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1038345-8146-4b2e-8678-368bd2c98c99-tmp\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.226270 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-client-ca\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.226396 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-config\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.237443 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1038345-8146-4b2e-8678-368bd2c98c99-serving-cert\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.255155 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hhl\" (UniqueName: \"kubernetes.io/projected/f1038345-8146-4b2e-8678-368bd2c98c99-kube-api-access-72hhl\") pod \"route-controller-manager-59b65b8d76-jqdj8\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.334613 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.688811 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69369de4-4a5e-4f6c-bda2-0ce227331647" path="/var/lib/kubelet/pods/69369de4-4a5e-4f6c-bda2-0ce227331647/volumes" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.690058 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5" path="/var/lib/kubelet/pods/8ceaf6c1-9b09-4fc1-83cf-e5f522e4a9f5/volumes" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.704970 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.735012 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6764f7dfcf-htj8k"] Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.735900 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7997ac5e-4332-4152-b046-9cb8e04a604c" containerName="controller-manager" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.735987 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7997ac5e-4332-4152-b046-9cb8e04a604c" containerName="controller-manager" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.736184 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="7997ac5e-4332-4152-b046-9cb8e04a604c" containerName="controller-manager" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.746200 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.759400 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6764f7dfcf-htj8k"] Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.813382 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8"] Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.834717 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-config\") pod \"7997ac5e-4332-4152-b046-9cb8e04a604c\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.834825 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-proxy-ca-bundles\") pod \"7997ac5e-4332-4152-b046-9cb8e04a604c\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.834858 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-client-ca\") pod \"7997ac5e-4332-4152-b046-9cb8e04a604c\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.834884 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7997ac5e-4332-4152-b046-9cb8e04a604c-serving-cert\") pod 
\"7997ac5e-4332-4152-b046-9cb8e04a604c\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.834968 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzzgf\" (UniqueName: \"kubernetes.io/projected/7997ac5e-4332-4152-b046-9cb8e04a604c-kube-api-access-qzzgf\") pod \"7997ac5e-4332-4152-b046-9cb8e04a604c\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835004 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7997ac5e-4332-4152-b046-9cb8e04a604c-tmp\") pod \"7997ac5e-4332-4152-b046-9cb8e04a604c\" (UID: \"7997ac5e-4332-4152-b046-9cb8e04a604c\") " Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835171 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-client-ca\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835200 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpzc7\" (UniqueName: \"kubernetes.io/projected/73c78ecf-976f-4d33-aae1-9d98192f8131-kube-api-access-cpzc7\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835244 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/73c78ecf-976f-4d33-aae1-9d98192f8131-tmp\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835277 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-proxy-ca-bundles\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835322 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c78ecf-976f-4d33-aae1-9d98192f8131-serving-cert\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835362 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-config\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.835972 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7997ac5e-4332-4152-b046-9cb8e04a604c-tmp" (OuterVolumeSpecName: "tmp") pod "7997ac5e-4332-4152-b046-9cb8e04a604c" (UID: "7997ac5e-4332-4152-b046-9cb8e04a604c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.836374 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-client-ca" (OuterVolumeSpecName: "client-ca") pod "7997ac5e-4332-4152-b046-9cb8e04a604c" (UID: "7997ac5e-4332-4152-b046-9cb8e04a604c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.836538 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7997ac5e-4332-4152-b046-9cb8e04a604c" (UID: "7997ac5e-4332-4152-b046-9cb8e04a604c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.837150 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-config" (OuterVolumeSpecName: "config") pod "7997ac5e-4332-4152-b046-9cb8e04a604c" (UID: "7997ac5e-4332-4152-b046-9cb8e04a604c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.843909 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7997ac5e-4332-4152-b046-9cb8e04a604c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7997ac5e-4332-4152-b046-9cb8e04a604c" (UID: "7997ac5e-4332-4152-b046-9cb8e04a604c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.846771 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7997ac5e-4332-4152-b046-9cb8e04a604c-kube-api-access-qzzgf" (OuterVolumeSpecName: "kube-api-access-qzzgf") pod "7997ac5e-4332-4152-b046-9cb8e04a604c" (UID: "7997ac5e-4332-4152-b046-9cb8e04a604c"). InnerVolumeSpecName "kube-api-access-qzzgf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.936988 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/73c78ecf-976f-4d33-aae1-9d98192f8131-tmp\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937526 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-proxy-ca-bundles\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937583 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c78ecf-976f-4d33-aae1-9d98192f8131-serving-cert\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937625 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-config\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937677 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-client-ca\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937696 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cpzc7\" (UniqueName: \"kubernetes.io/projected/73c78ecf-976f-4d33-aae1-9d98192f8131-kube-api-access-cpzc7\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937749 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937762 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937774 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7997ac5e-4332-4152-b046-9cb8e04a604c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937784 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7997ac5e-4332-4152-b046-9cb8e04a604c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937795 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzzgf\" (UniqueName: \"kubernetes.io/projected/7997ac5e-4332-4152-b046-9cb8e04a604c-kube-api-access-qzzgf\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937807 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7997ac5e-4332-4152-b046-9cb8e04a604c-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.937837 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/73c78ecf-976f-4d33-aae1-9d98192f8131-tmp\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.939839 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-client-ca\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.940621 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-proxy-ca-bundles\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.940898 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-config\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.944583 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c78ecf-976f-4d33-aae1-9d98192f8131-serving-cert\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:16 crc kubenswrapper[5113]: I1208 17:44:16.957434 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpzc7\" (UniqueName: \"kubernetes.io/projected/73c78ecf-976f-4d33-aae1-9d98192f8131-kube-api-access-cpzc7\") pod \"controller-manager-6764f7dfcf-htj8k\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.067867 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.195894 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" event={"ID":"f1038345-8146-4b2e-8678-368bd2c98c99","Type":"ContainerStarted","Data":"c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb"} Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.196331 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" event={"ID":"f1038345-8146-4b2e-8678-368bd2c98c99","Type":"ContainerStarted","Data":"ba5e2fc917da9788994dc2fb4fb9cd437a35657c1722d1e2bad08cd61d504309"} Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.196353 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.200605 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" event={"ID":"7997ac5e-4332-4152-b046-9cb8e04a604c","Type":"ContainerDied","Data":"c06555231024965d020965876fb7273e212f17504d6a3da083557e508766604d"} Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.200682 5113 scope.go:117] "RemoveContainer" containerID="2b91a03de6f1e06d48583a4b3bbea71b60e2e90f46ef38daa1e1b39ed79e6c30" Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.200626 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg" Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.222426 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" podStartSLOduration=2.222406197 podStartE2EDuration="2.222406197s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:17.214683949 +0000 UTC m=+222.930477065" watchObservedRunningTime="2025-12-08 17:44:17.222406197 +0000 UTC m=+222.938199313" Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.242493 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg"] Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.247784 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65dcd8c5cb-h74xg"] Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.301895 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6764f7dfcf-htj8k"] Dec 08 17:44:17 crc kubenswrapper[5113]: W1208 17:44:17.307984 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73c78ecf_976f_4d33_aae1_9d98192f8131.slice/crio-9f80a5d8f748613460cfb862d332bfceaf77a18c6edb5ea6961729298acdfdf9 WatchSource:0}: Error finding container 9f80a5d8f748613460cfb862d332bfceaf77a18c6edb5ea6961729298acdfdf9: Status 404 returned error can't find the container with id 9f80a5d8f748613460cfb862d332bfceaf77a18c6edb5ea6961729298acdfdf9 Dec 08 17:44:17 crc kubenswrapper[5113]: I1208 17:44:17.844367 5113 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:18 crc kubenswrapper[5113]: I1208 17:44:18.213625 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" event={"ID":"73c78ecf-976f-4d33-aae1-9d98192f8131","Type":"ContainerStarted","Data":"ef051369f022bb131f42b46b6540912c46ddff7a8355f50906c3707b0897f664"} Dec 08 17:44:18 crc kubenswrapper[5113]: I1208 17:44:18.213708 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" event={"ID":"73c78ecf-976f-4d33-aae1-9d98192f8131","Type":"ContainerStarted","Data":"9f80a5d8f748613460cfb862d332bfceaf77a18c6edb5ea6961729298acdfdf9"} Dec 08 17:44:18 crc kubenswrapper[5113]: I1208 17:44:18.214054 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:18 crc kubenswrapper[5113]: I1208 17:44:18.224145 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:18 crc kubenswrapper[5113]: I1208 17:44:18.238141 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" podStartSLOduration=3.238114654 podStartE2EDuration="3.238114654s" podCreationTimestamp="2025-12-08 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:44:18.235284682 +0000 UTC m=+223.951077808" watchObservedRunningTime="2025-12-08 17:44:18.238114654 +0000 UTC m=+223.953907780" Dec 08 17:44:18 crc kubenswrapper[5113]: I1208 17:44:18.696176 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7997ac5e-4332-4152-b046-9cb8e04a604c" path="/var/lib/kubelet/pods/7997ac5e-4332-4152-b046-9cb8e04a604c/volumes" Dec 08 17:44:19 crc kubenswrapper[5113]: I1208 17:44:19.027706 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-klln7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 08 17:44:19 crc kubenswrapper[5113]: I1208 17:44:19.027832 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-klln7" podUID="e5062982-84d6-4c80-8dce-4ab0e3098e96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 08 17:44:23 crc kubenswrapper[5113]: I1208 17:44:23.256162 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:44:23 crc kubenswrapper[5113]: I1208 17:44:23.256941 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 
17:44:26 crc kubenswrapper[5113]: I1208 17:44:26.184544 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-klln7" Dec 08 17:44:30 crc kubenswrapper[5113]: I1208 17:44:30.642046 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:44:31 crc kubenswrapper[5113]: I1208 17:44:31.902868 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift" containerID="cri-o://c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb" gracePeriod=15 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.479921 5113 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.480311 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.483547 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c" gracePeriod=15 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.483851 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a" gracePeriod=15 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.483916 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad" gracePeriod=15 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.484004 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf" gracePeriod=15 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.484113 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254" gracePeriod=15 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.494676 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495381 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495399 5113 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495419 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495427 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495446 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495453 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495461 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495467 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495476 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495484 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495500 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495506 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495515 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495522 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495534 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495543 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495562 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495570 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495698 
5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495711 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495723 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495730 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495740 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495748 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495762 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495778 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495897 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.495906 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.496054 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.573154 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.573209 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.573268 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 
17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.573307 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.573344 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.580370 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6764f7dfcf-htj8k"] Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.580441 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8"] Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.580793 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" containerName="route-controller-manager" containerID="cri-o://c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb" gracePeriod=30 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.581106 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" containerName="controller-manager" containerID="cri-o://ef051369f022bb131f42b46b6540912c46ddff7a8355f50906c3707b0897f664" gracePeriod=30 Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.582146 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:38 crc kubenswrapper[5113]: E1208 17:44:38.583232 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-59b65b8d76-jqdj8.187f4e7d32af4bcb openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-59b65b8d76-jqdj8,UID:f1038345-8146-4b2e-8678-368bd2c98c99,APIVersion:v1,ResourceVersion:39252,FieldPath:spec.containers{route-controller-manager},},Reason:Killing,Message:Stopping container route-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:44:38.580751307 +0000 UTC m=+244.296544423,LastTimestamp:2025-12-08 17:44:38.580751307 +0000 UTC m=+244.296544423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.584028 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.584641 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.584959 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.641332 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: E1208 17:44:38.641964 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674413 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674501 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674541 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674564 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc 
kubenswrapper[5113]: I1208 17:44:38.674650 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674705 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674746 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674788 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674930 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.674981 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.675120 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.675189 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.675216 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc 
kubenswrapper[5113]: I1208 17:44:38.675248 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.675286 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.775796 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776011 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776118 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776175 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776346 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776443 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776570 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.776732 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.777877 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.777991 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: I1208 17:44:38.943888 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:38 crc kubenswrapper[5113]: W1208 17:44:38.992560 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-f1ec6b2a43e8b05b14d871f0472930b0e6fd2b072ee565e95c6a69c6feaef54e WatchSource:0}: Error finding container f1ec6b2a43e8b05b14d871f0472930b0e6fd2b072ee565e95c6a69c6feaef54e: Status 404 returned error can't find the container with id f1ec6b2a43e8b05b14d871f0472930b0e6fd2b072ee565e95c6a69c6feaef54e Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.127320 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.128898 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.129282 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.282972 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72hhl\" (UniqueName: \"kubernetes.io/projected/f1038345-8146-4b2e-8678-368bd2c98c99-kube-api-access-72hhl\") pod \"f1038345-8146-4b2e-8678-368bd2c98c99\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.283532 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-config\") pod \"f1038345-8146-4b2e-8678-368bd2c98c99\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.283633 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-client-ca\") pod \"f1038345-8146-4b2e-8678-368bd2c98c99\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.283720 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1038345-8146-4b2e-8678-368bd2c98c99-tmp\") pod \"f1038345-8146-4b2e-8678-368bd2c98c99\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.283773 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1038345-8146-4b2e-8678-368bd2c98c99-serving-cert\") pod \"f1038345-8146-4b2e-8678-368bd2c98c99\" (UID: \"f1038345-8146-4b2e-8678-368bd2c98c99\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.285543 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1038345-8146-4b2e-8678-368bd2c98c99" (UID: "f1038345-8146-4b2e-8678-368bd2c98c99"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.285713 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1038345-8146-4b2e-8678-368bd2c98c99-tmp" (OuterVolumeSpecName: "tmp") pod "f1038345-8146-4b2e-8678-368bd2c98c99" (UID: "f1038345-8146-4b2e-8678-368bd2c98c99"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.285947 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-config" (OuterVolumeSpecName: "config") pod "f1038345-8146-4b2e-8678-368bd2c98c99" (UID: "f1038345-8146-4b2e-8678-368bd2c98c99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.290915 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1038345-8146-4b2e-8678-368bd2c98c99-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1038345-8146-4b2e-8678-368bd2c98c99" (UID: "f1038345-8146-4b2e-8678-368bd2c98c99"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.291980 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1038345-8146-4b2e-8678-368bd2c98c99-kube-api-access-72hhl" (OuterVolumeSpecName: "kube-api-access-72hhl") pod "f1038345-8146-4b2e-8678-368bd2c98c99" (UID: "f1038345-8146-4b2e-8678-368bd2c98c99"). InnerVolumeSpecName "kube-api-access-72hhl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.363457 5113 generic.go:358] "Generic (PLEG): container finished" podID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" containerID="3c0df65458070c6fbf1f92480182eafc40e6d8ff770661e2590dd6f1f565d7c3" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.363588 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8","Type":"ContainerDied","Data":"3c0df65458070c6fbf1f92480182eafc40e6d8ff770661e2590dd6f1f565d7c3"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.364589 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.365432 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.365933 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.382666 5113 generic.go:358] "Generic (PLEG): container finished" podID="f1038345-8146-4b2e-8678-368bd2c98c99" containerID="c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.382914 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" event={"ID":"f1038345-8146-4b2e-8678-368bd2c98c99","Type":"ContainerDied","Data":"c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.383029 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" event={"ID":"f1038345-8146-4b2e-8678-368bd2c98c99","Type":"ContainerDied","Data":"ba5e2fc917da9788994dc2fb4fb9cd437a35657c1722d1e2bad08cd61d504309"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.383148 5113 scope.go:117] "RemoveContainer" containerID="c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.383470 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.385274 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386086 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386137 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1038345-8146-4b2e-8678-368bd2c98c99-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386151 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1038345-8146-4b2e-8678-368bd2c98c99-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386165 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72hhl\" (UniqueName: \"kubernetes.io/projected/f1038345-8146-4b2e-8678-368bd2c98c99-kube-api-access-72hhl\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386181 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1038345-8146-4b2e-8678-368bd2c98c99-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386195 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386425 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386644 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.386823 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.387050 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 
17:44:39.387224 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.392297 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.393055 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.393733 5113 generic.go:358] "Generic (PLEG): container finished" podID="73c78ecf-976f-4d33-aae1-9d98192f8131" containerID="ef051369f022bb131f42b46b6540912c46ddff7a8355f50906c3707b0897f664" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.393862 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" event={"ID":"73c78ecf-976f-4d33-aae1-9d98192f8131","Type":"ContainerDied","Data":"ef051369f022bb131f42b46b6540912c46ddff7a8355f50906c3707b0897f664"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.410295 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"f1ec6b2a43e8b05b14d871f0472930b0e6fd2b072ee565e95c6a69c6feaef54e"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.413301 5113 generic.go:358] "Generic (PLEG): container finished" podID="c96a8ac1-0465-4a81-88bf-472026300c81" containerID="c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.413451 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" event={"ID":"c96a8ac1-0465-4a81-88bf-472026300c81","Type":"ContainerDied","Data":"c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.413504 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" event={"ID":"c96a8ac1-0465-4a81-88bf-472026300c81","Type":"ContainerDied","Data":"9cf1f78717f861554ab92d2388b7a87e5af65cd24d7a7ca8b96ec055d649ee96"} Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.413634 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.414504 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.414967 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.415956 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.416115 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.419366 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.419972 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.420258 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.420926 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.421404 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.421568 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.423580 5113 scope.go:117] "RemoveContainer" containerID="c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.424747 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.424771 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.424780 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a" exitCode=0 Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.424789 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf" exitCode=2 Dec 08 17:44:39 crc kubenswrapper[5113]: E1208 17:44:39.424879 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb\": container with ID starting with c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb not found: ID does not exist" containerID="c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.424907 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb"} err="failed to get container status \"c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb\": rpc error: code = NotFound desc = could not find container \"c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb\": container with ID starting with c971b8c1df64c785a4a362fba7f03e0ab19b44a81aa650821b14d7b8c10633cb not found: ID does not exist" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.424928 5113 scope.go:117] "RemoveContainer" containerID="c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.445271 5113 scope.go:117] "RemoveContainer" containerID="c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb" Dec 08 17:44:39 crc kubenswrapper[5113]: E1208 17:44:39.445856 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb\": container with ID starting with c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb not found: ID does not exist" 
containerID="c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.445914 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb"} err="failed to get container status \"c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb\": rpc error: code = NotFound desc = could not find container \"c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb\": container with ID starting with c78d845100f989eabc06b0db493ba74416ad6bb165dd6bf214da0cd680e33dbb not found: ID does not exist" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.445952 5113 scope.go:117] "RemoveContainer" containerID="fc520fd95dc81a3e65efea896eaecaf5e7f328b90c8790f69a353cc51335243f" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487586 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-error\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487689 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-ocp-branding-template\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487719 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-router-certs\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487744 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z29hm\" (UniqueName: \"kubernetes.io/projected/c96a8ac1-0465-4a81-88bf-472026300c81-kube-api-access-z29hm\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487782 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-serving-cert\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487830 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-session\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487889 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96a8ac1-0465-4a81-88bf-472026300c81-audit-dir\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 
17:44:39.487919 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-service-ca\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.487938 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-provider-selection\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.488101 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-audit-policies\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.488147 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-trusted-ca-bundle\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.488191 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-cliconfig\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.488241 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-login\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.488293 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-idp-0-file-data\") pod \"c96a8ac1-0465-4a81-88bf-472026300c81\" (UID: \"c96a8ac1-0465-4a81-88bf-472026300c81\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.488703 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c96a8ac1-0465-4a81-88bf-472026300c81-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.490064 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.490097 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.490596 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.490613 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.497138 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.497451 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96a8ac1-0465-4a81-88bf-472026300c81-kube-api-access-z29hm" (OuterVolumeSpecName: "kube-api-access-z29hm") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "kube-api-access-z29hm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.498809 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.498798 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.502752 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.503378 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.507202 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.508239 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.511525 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c96a8ac1-0465-4a81-88bf-472026300c81" (UID: "c96a8ac1-0465-4a81-88bf-472026300c81"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.590606 5113 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c96a8ac1-0465-4a81-88bf-472026300c81-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591121 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591141 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591156 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591171 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591194 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591208 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591221 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591237 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591251 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591265 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591282 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z29hm\" (UniqueName: 
\"kubernetes.io/projected/c96a8ac1-0465-4a81-88bf-472026300c81-kube-api-access-z29hm\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591296 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.591311 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c96a8ac1-0465-4a81-88bf-472026300c81-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.699574 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.700447 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.701153 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.701825 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.702260 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.731375 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.731893 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.732329 5113 status_manager.go:895] 
"Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.732727 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.896382 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpzc7\" (UniqueName: \"kubernetes.io/projected/73c78ecf-976f-4d33-aae1-9d98192f8131-kube-api-access-cpzc7\") pod \"73c78ecf-976f-4d33-aae1-9d98192f8131\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.896463 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c78ecf-976f-4d33-aae1-9d98192f8131-serving-cert\") pod \"73c78ecf-976f-4d33-aae1-9d98192f8131\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.896545 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/73c78ecf-976f-4d33-aae1-9d98192f8131-tmp\") pod \"73c78ecf-976f-4d33-aae1-9d98192f8131\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.896578 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-proxy-ca-bundles\") pod \"73c78ecf-976f-4d33-aae1-9d98192f8131\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.896635 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-config\") pod \"73c78ecf-976f-4d33-aae1-9d98192f8131\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.896666 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-client-ca\") pod \"73c78ecf-976f-4d33-aae1-9d98192f8131\" (UID: \"73c78ecf-976f-4d33-aae1-9d98192f8131\") " Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.897409 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73c78ecf-976f-4d33-aae1-9d98192f8131-tmp" (OuterVolumeSpecName: "tmp") pod "73c78ecf-976f-4d33-aae1-9d98192f8131" (UID: "73c78ecf-976f-4d33-aae1-9d98192f8131"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.897636 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "73c78ecf-976f-4d33-aae1-9d98192f8131" (UID: "73c78ecf-976f-4d33-aae1-9d98192f8131"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.897782 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-client-ca" (OuterVolumeSpecName: "client-ca") pod "73c78ecf-976f-4d33-aae1-9d98192f8131" (UID: "73c78ecf-976f-4d33-aae1-9d98192f8131"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.897803 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-config" (OuterVolumeSpecName: "config") pod "73c78ecf-976f-4d33-aae1-9d98192f8131" (UID: "73c78ecf-976f-4d33-aae1-9d98192f8131"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.902733 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c78ecf-976f-4d33-aae1-9d98192f8131-kube-api-access-cpzc7" (OuterVolumeSpecName: "kube-api-access-cpzc7") pod "73c78ecf-976f-4d33-aae1-9d98192f8131" (UID: "73c78ecf-976f-4d33-aae1-9d98192f8131"). InnerVolumeSpecName "kube-api-access-cpzc7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.905941 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c78ecf-976f-4d33-aae1-9d98192f8131-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "73c78ecf-976f-4d33-aae1-9d98192f8131" (UID: "73c78ecf-976f-4d33-aae1-9d98192f8131"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.997828 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cpzc7\" (UniqueName: \"kubernetes.io/projected/73c78ecf-976f-4d33-aae1-9d98192f8131-kube-api-access-cpzc7\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.998301 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c78ecf-976f-4d33-aae1-9d98192f8131-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.998392 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/73c78ecf-976f-4d33-aae1-9d98192f8131-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.998462 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.998525 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:39 crc kubenswrapper[5113]: I1208 17:44:39.998588 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73c78ecf-976f-4d33-aae1-9d98192f8131-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.437892 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.441924 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.441917 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" event={"ID":"73c78ecf-976f-4d33-aae1-9d98192f8131","Type":"ContainerDied","Data":"9f80a5d8f748613460cfb862d332bfceaf77a18c6edb5ea6961729298acdfdf9"} Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.442202 5113 scope.go:117] "RemoveContainer" containerID="ef051369f022bb131f42b46b6540912c46ddff7a8355f50906c3707b0897f664" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.442975 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.443488 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.443991 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.444279 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4"} Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.444388 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.444490 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.444878 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.446560 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 
08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.446562 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.447045 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.447406 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.460478 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.460994 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.461317 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.461598 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.524839 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.526218 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: 
connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.526679 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.527099 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.527618 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.527656 5113 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.527910 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Dec 08 17:44:40 crc kubenswrapper[5113]: E1208 17:44:40.728874 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.758474 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.759112 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.759571 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.760105 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.760384 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.808958 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-var-lock\") pod \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.809125 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kube-api-access\") pod \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.809178 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kubelet-dir\") pod \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\" (UID: \"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.809300 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-var-lock" (OuterVolumeSpecName: "var-lock") pod "2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" (UID: "2bff4f8b-254e-41ed-95ea-6c7a6d137cf8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.809393 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" (UID: "2bff4f8b-254e-41ed-95ea-6c7a6d137cf8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.817616 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" (UID: "2bff4f8b-254e-41ed-95ea-6c7a6d137cf8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.880096 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.880849 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.881610 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.882394 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.883150 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.883539 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.884019 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.909660 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.909708 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.909748 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.909782 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.909841 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.910134 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.910160 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.910172 5113 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2bff4f8b-254e-41ed-95ea-6c7a6d137cf8-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.910215 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.910328 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.911106 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.911370 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:40 crc kubenswrapper[5113]: I1208 17:44:40.914790 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.011214 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.011295 5113 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.011318 5113 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.011344 5113 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.011366 5113 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.130737 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.459465 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.464335 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254" exitCode=0 Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.464571 5113 scope.go:117] "RemoveContainer" containerID="18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.464682 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.470747 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"2bff4f8b-254e-41ed-95ea-6c7a6d137cf8","Type":"ContainerDied","Data":"64345dc1fac5d4a1040bc57f8436e82b4928db31287d673c8b0e7b463770a2b8"} Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.470813 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64345dc1fac5d4a1040bc57f8436e82b4928db31287d673c8b0e7b463770a2b8" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.470858 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.474150 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.475426 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.522730 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.522810 5113 scope.go:117] "RemoveContainer" containerID="4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.523087 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.523524 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.524138 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.524491 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: 
connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.524857 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.525171 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.526052 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.526404 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.526771 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.544534 5113 scope.go:117] "RemoveContainer" containerID="7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.562857 5113 scope.go:117] "RemoveContainer" containerID="89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.580084 5113 scope.go:117] "RemoveContainer" containerID="6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.603269 5113 scope.go:117] "RemoveContainer" containerID="8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.673591 5113 scope.go:117] "RemoveContainer" containerID="18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.674136 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c\": container with ID starting with 18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c not found: ID does not exist" containerID="18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.674517 5113 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c"} err="failed to get container status \"18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c\": rpc error: code = NotFound desc = could not find container \"18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c\": container with ID starting with 18397d3db4e497163e25b1962aca9e0dc2379afdbc3d278b0b0e57a22f27aa6c not found: ID does not exist" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.674620 5113 scope.go:117] "RemoveContainer" containerID="4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.676376 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\": container with ID starting with 4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad not found: ID does not exist" containerID="4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.676459 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad"} err="failed to get container status \"4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\": rpc error: code = NotFound desc = could not find container \"4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad\": container with ID starting with 4f88ee14833b4edd7a2efc3934af66dfb0638ba27c9b46cbc4b29875af02a2ad not found: ID does not exist" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.676503 5113 scope.go:117] "RemoveContainer" containerID="7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.677603 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\": container with ID starting with 7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a not found: ID does not exist" containerID="7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.677652 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a"} err="failed to get container status \"7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\": rpc error: code = NotFound desc = could not find container \"7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a\": container with ID starting with 7a4030e1cf3b2befb9bfe2f87edc66e01d49254c600f508ea30e80d02f9cb14a not found: ID does not exist" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.677684 5113 scope.go:117] "RemoveContainer" containerID="89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.678221 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\": container with ID starting with 89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf not found: ID does 
not exist" containerID="89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.678261 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf"} err="failed to get container status \"89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\": rpc error: code = NotFound desc = could not find container \"89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf\": container with ID starting with 89c89abdf867b8e3ef88e114487035058fb21b421cae9d225ba78ea29e3fadcf not found: ID does not exist" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.678287 5113 scope.go:117] "RemoveContainer" containerID="6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.678649 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\": container with ID starting with 6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254 not found: ID does not exist" containerID="6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.678703 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254"} err="failed to get container status \"6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\": rpc error: code = NotFound desc = could not find container \"6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254\": container with ID starting with 6f294f4402f874d84080dfdcd3a18133caae8e3fa02a8ba524b815de5e216254 not found: ID does not exist" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.678732 5113 scope.go:117] "RemoveContainer" containerID="8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.679331 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\": container with ID starting with 8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2 not found: ID does not exist" containerID="8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2" Dec 08 17:44:41 crc kubenswrapper[5113]: I1208 17:44:41.679439 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2"} err="failed to get container status \"8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\": rpc error: code = NotFound desc = could not find container \"8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2\": container with ID starting with 8014fb793ed293fbfe394957a05b4f85ac1606f60d7ad2db20f830b5e34ccab2 not found: ID does not exist" Dec 08 17:44:41 crc kubenswrapper[5113]: E1208 17:44:41.932179 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Dec 08 17:44:42 crc kubenswrapper[5113]: I1208 17:44:42.690619 5113 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Dec 08 17:44:43 crc kubenswrapper[5113]: E1208 17:44:43.533441 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Dec 08 17:44:44 crc kubenswrapper[5113]: I1208 17:44:44.685610 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:44 crc kubenswrapper[5113]: I1208 17:44:44.686150 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:44 crc kubenswrapper[5113]: I1208 17:44:44.686617 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:44 crc kubenswrapper[5113]: I1208 17:44:44.687212 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:45 crc kubenswrapper[5113]: E1208 17:44:45.106707 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-59b65b8d76-jqdj8.187f4e7d32af4bcb openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-59b65b8d76-jqdj8,UID:f1038345-8146-4b2e-8678-368bd2c98c99,APIVersion:v1,ResourceVersion:39252,FieldPath:spec.containers{route-controller-manager},},Reason:Killing,Message:Stopping container route-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 17:44:38.580751307 +0000 UTC m=+244.296544423,LastTimestamp:2025-12-08 17:44:38.580751307 +0000 UTC m=+244.296544423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 17:44:46 crc kubenswrapper[5113]: E1208 17:44:46.735566 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.679895 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.682373 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.682967 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.683369 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.683598 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.701249 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.701289 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:44:50 crc kubenswrapper[5113]: E1208 17:44:50.701823 5113 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:50 crc kubenswrapper[5113]: I1208 17:44:50.702238 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:50 crc kubenswrapper[5113]: W1208 17:44:50.728213 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-a40dbc202d1305b83387807f47ac62aacb9aafe66258ad594ff453d53d7691b8 WatchSource:0}: Error finding container a40dbc202d1305b83387807f47ac62aacb9aafe66258ad594ff453d53d7691b8: Status 404 returned error can't find the container with id a40dbc202d1305b83387807f47ac62aacb9aafe66258ad594ff453d53d7691b8 Dec 08 17:44:51 crc kubenswrapper[5113]: I1208 17:44:51.547729 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"a40dbc202d1305b83387807f47ac62aacb9aafe66258ad594ff453d53d7691b8"} Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.555250 5113 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="0e4ff52c7f35ee07bea602f847c5ddcf018f4d57067384ce574de0ee5424d84e" exitCode=0 Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.555361 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"0e4ff52c7f35ee07bea602f847c5ddcf018f4d57067384ce574de0ee5424d84e"} Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.555544 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.555806 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.556275 5113 status_manager.go:895] "Failed to get status for pod" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" pod="openshift-authentication/oauth-openshift-66458b6674-dvf7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-dvf7w\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:52 crc kubenswrapper[5113]: E1208 17:44:52.556273 5113 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.556569 5113 status_manager.go:895] "Failed to get status for pod" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" pod="openshift-controller-manager/controller-manager-6764f7dfcf-htj8k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6764f7dfcf-htj8k\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.557694 5113 status_manager.go:895] "Failed to get status for pod" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" pod="openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-59b65b8d76-jqdj8\": dial tcp 38.102.83.194:6443: connect: connection 
refused" Dec 08 17:44:52 crc kubenswrapper[5113]: I1208 17:44:52.558084 5113 status_manager.go:895] "Failed to get status for pod" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.256397 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.256938 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.564073 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f67d55d7175484599ef8cfc62470b46e22c989b5fe4b182afdfdc281cf697f3f"} Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.564613 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b3491f57d3cbc1aef6557b7a845b0970bc816f49bd7ce519373764bc55fe2957"} Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.567273 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.567337 5113 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be" exitCode=1 Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.567503 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be"} Dec 08 17:44:53 crc kubenswrapper[5113]: I1208 17:44:53.595731 5113 scope.go:117] "RemoveContainer" containerID="a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be" Dec 08 17:44:54 crc kubenswrapper[5113]: I1208 17:44:54.576426 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"714906ca7f07be5c6454c7692fc8166c9e5c5fc099962abd7ba6d352e20c9c31"} Dec 08 17:44:54 crc kubenswrapper[5113]: I1208 17:44:54.576483 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4239358e8efeb2cb5b701d82bac1a56fc83a762c359412050f457ebe7f44774f"} Dec 08 17:44:54 crc kubenswrapper[5113]: I1208 17:44:54.578720 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:44:54 crc kubenswrapper[5113]: I1208 17:44:54.578863 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"dc8014cd8de21c6ff60f91f04ba6218a7c7a5b7a18e523f7159d9e12e5d8edeb"} Dec 08 17:44:54 crc kubenswrapper[5113]: I1208 17:44:54.798671 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.587075 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0f1526a38867be4b68915814d9ea23695b895a9c93eccd56016310481fb59e39"} Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.587337 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.587649 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.587594 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.702453 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.702538 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.708437 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]log ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]etcd ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-filter ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-apiextensions-informers ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-apiextensions-controllers ok Dec 08 17:44:55 
crc kubenswrapper[5113]: [+]poststarthook/crd-informer-synced ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-system-namespaces-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 08 17:44:55 crc kubenswrapper[5113]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 08 17:44:55 crc kubenswrapper[5113]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/bootstrap-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/start-kube-aggregator-informers ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-registration-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-discovery-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]autoregister-completion ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-openapi-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 08 17:44:55 crc kubenswrapper[5113]: livez check failed Dec 08 17:44:55 crc kubenswrapper[5113]: I1208 17:44:55.708507 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="57755cc5f99000cc11e193051474d4e2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 17:45:00 crc kubenswrapper[5113]: I1208 17:45:00.620478 5113 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:00 crc kubenswrapper[5113]: I1208 17:45:00.620875 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:00 crc kubenswrapper[5113]: I1208 17:45:00.709853 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:00 crc kubenswrapper[5113]: I1208 17:45:00.713650 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="53cc290b-570b-43d6-a1ff-67ff3acb1ae3" Dec 08 17:45:01 crc kubenswrapper[5113]: I1208 17:45:01.623371 5113 kubelet.go:3323] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:45:01 crc kubenswrapper[5113]: I1208 17:45:01.623439 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:45:02 crc kubenswrapper[5113]: I1208 17:45:02.629089 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:45:02 crc kubenswrapper[5113]: I1208 17:45:02.629132 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:45:02 crc kubenswrapper[5113]: I1208 17:45:02.635097 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:03 crc kubenswrapper[5113]: I1208 17:45:03.541001 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:45:03 crc kubenswrapper[5113]: I1208 17:45:03.541430 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 17:45:03 crc kubenswrapper[5113]: I1208 17:45:03.541513 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 17:45:03 crc kubenswrapper[5113]: I1208 17:45:03.633873 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:45:03 crc kubenswrapper[5113]: I1208 17:45:03.633909 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa" Dec 08 17:45:04 crc kubenswrapper[5113]: I1208 17:45:04.702798 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="53cc290b-570b-43d6-a1ff-67ff3acb1ae3" Dec 08 17:45:10 crc kubenswrapper[5113]: I1208 17:45:10.042182 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 17:45:10 crc kubenswrapper[5113]: I1208 17:45:10.572768 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 17:45:10 crc kubenswrapper[5113]: I1208 17:45:10.596121 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 17:45:10 crc kubenswrapper[5113]: I1208 17:45:10.899897 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 17:45:10 crc kubenswrapper[5113]: I1208 17:45:10.908572 5113 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.024545 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.349852 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.488457 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.535509 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.733634 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.791466 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 17:45:11 crc kubenswrapper[5113]: I1208 17:45:11.871825 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 17:45:12 crc kubenswrapper[5113]: I1208 17:45:12.060365 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 17:45:12 crc kubenswrapper[5113]: I1208 17:45:12.112415 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 17:45:12 crc kubenswrapper[5113]: I1208 17:45:12.337792 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 17:45:12 crc kubenswrapper[5113]: I1208 17:45:12.540373 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 17:45:12 crc kubenswrapper[5113]: I1208 17:45:12.697898 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.170757 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.541475 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.541611 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.589241 5113 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.596828 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.621574 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.645320 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.655360 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.749545 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.780381 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 17:45:13 crc kubenswrapper[5113]: I1208 17:45:13.858469 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.027413 5113 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.081004 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.088403 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.126148 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.182560 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.237363 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.432236 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.536513 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.560574 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.634405 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 17:45:14 crc 
kubenswrapper[5113]: I1208 17:45:14.677802 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.801951 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Dec 08 17:45:14 crc kubenswrapper[5113]: I1208 17:45:14.971442 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.034951 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.075377 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.135827 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.223188 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.298050 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.307666 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.313057 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.395148 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.450918 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.457512 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.505140 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.575890 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.652830 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.659044 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.729746 5113 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.766599 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.823958 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.856121 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 08 17:45:15 crc kubenswrapper[5113]: I1208 17:45:15.973281 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:15.976451 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:15.977697 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:15.983289 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.013081 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.156945 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.378052 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.435608 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.444595 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.467436 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.499456 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.562308 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.601593 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.715894 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.718819 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.746787 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.767968 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.854675 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.868475 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 08 17:45:16 crc kubenswrapper[5113]: I1208 17:45:16.971005 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.068935 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.128739 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.176376 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.250722 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.309201 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.310951 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.573966 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.583196 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.666022 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.704834 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.786322 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.856161 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.866279 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.891895 5113 ???:1] "http: TLS handshake error from 192.168.126.11:53406: no serving certificate available for the kubelet"
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.927869 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.934616 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 08 17:45:17 crc kubenswrapper[5113]: I1208 17:45:17.941259 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.004420 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.052142 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.280067 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.324122 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.352150 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.396829 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.501057 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.577088 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.608263 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.684787 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.720883 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.789375 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.829667 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 08 17:45:18 crc kubenswrapper[5113]: I1208 17:45:18.891573 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.012251 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.144772 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.161557 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.187909 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.255448 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.260864 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.268483 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.326384 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.359026 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.376583 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.412343 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.514623 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.548230 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.592456 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.631996 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.678286 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.688146 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.753180 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.758320 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.786202 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.855332 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.874506 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.898547 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 17:45:19 crc kubenswrapper[5113]: I1208 17:45:19.932068 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.065140 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.127766 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.141021 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.142791 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.216353 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.216969 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.243296 5113 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.297638 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.422330 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.429956 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.468234 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.548757 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.552711 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.553601 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.605817 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.607718 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.636905 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.650654 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.675250 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.692014 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.759647 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.763516 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.797599 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.813505 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.912584 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Dec 08 17:45:20 crc kubenswrapper[5113]: I1208 17:45:20.963441 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.047015 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.053198 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.061736 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.075964 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.085400 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.223163 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.248806 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.250539 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.267959 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.298229 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.379530 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.389349 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.395443 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.405515 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.579650 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.623546 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.705369 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.705524 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.709662 5113 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.752836 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.797227 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.844214 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.846512 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.892452 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.927537 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Dec 08 17:45:21 crc kubenswrapper[5113]: I1208 17:45:21.956276 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.014128 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.098911 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.149030 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.166223 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.184692 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.379683 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.389307 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.445561 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.483092 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.484268 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.504119 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.538771 5113 ???:1] "http: TLS handshake error from 192.168.126.11:44846: no serving certificate available for the kubelet"
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.541490 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.596465 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.776442 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.922429 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Dec 08 17:45:22 crc kubenswrapper[5113]: I1208 17:45:22.974534 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.156289 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.250814 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.256080 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.256160 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.256228 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4"
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.257057 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6f7c021a2fcc0468a28a7246bb0df375a7b306c4799388b2ae1634b8cdc5d78"} pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.257136 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" containerID="cri-o://f6f7c021a2fcc0468a28a7246bb0df375a7b306c4799388b2ae1634b8cdc5d78" gracePeriod=600
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.279719 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.377933 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.541542 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.542148 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.542241 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.543622 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"dc8014cd8de21c6ff60f91f04ba6218a7c7a5b7a18e523f7159d9e12e5d8edeb"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.543925 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="kube-controller-manager" containerID="cri-o://dc8014cd8de21c6ff60f91f04ba6218a7c7a5b7a18e523f7159d9e12e5d8edeb" gracePeriod=30
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.758917 5113 generic.go:358] "Generic (PLEG): container finished" podID="52658507-b084-49cb-a694-f012d44ccc82" containerID="f6f7c021a2fcc0468a28a7246bb0df375a7b306c4799388b2ae1634b8cdc5d78" exitCode=0
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.758968 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerDied","Data":"f6f7c021a2fcc0468a28a7246bb0df375a7b306c4799388b2ae1634b8cdc5d78"}
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.759055 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"e39046b528925c2cc2211aee5d6a8acef683e38fa51b9e6de30a762333639281"}
Dec 08 17:45:23 crc kubenswrapper[5113]: I1208 17:45:23.891175 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.076600 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.128261 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.194764 5113 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.200105 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-dvf7w","openshift-controller-manager/controller-manager-6764f7dfcf-htj8k","openshift-route-controller-manager/route-controller-manager-59b65b8d76-jqdj8","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.200191 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc","openshift-authentication/oauth-openshift-77574cf746-dxh5x","openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-f765bddb7-4tftp"]
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.200962 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" containerName="route-controller-manager"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.200997 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" containerName="route-controller-manager"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201070 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" containerName="installer"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201085 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" containerName="installer"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201156 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" containerName="controller-manager"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201169 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" containerName="controller-manager"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201201 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201212 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201382 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" containerName="oauth-openshift"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201376 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201422 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="288a1135-0a6c-4b14-ac02-838923e33cfa"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201404 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" containerName="controller-manager"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201534 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="2bff4f8b-254e-41ed-95ea-6c7a6d137cf8" containerName="installer"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.201551 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" containerName="route-controller-manager"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.260006 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.274253 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.281500 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.355142 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x"
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.358545 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.359208 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.359456 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.359637 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.359760 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.359948 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.360339 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.362754 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.363131 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.364473 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.364496 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.368729 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.372561 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.376678 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.409549 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.413852 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.414196 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.414513 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.414527 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.414981 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.416428 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.467300 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.467388 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.471265 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.471576 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.471921 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.472676 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.472948 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.473775 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.489664 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.490223 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.490336 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.496875 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.498114 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.498086876 podStartE2EDuration="24.498086876s" podCreationTimestamp="2025-12-08 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:24.493875259 +0000 UTC m=+290.209668425" watchObservedRunningTime="2025-12-08 17:45:24.498086876 +0000 UTC m=+290.213879992" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.508879 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.517743 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9d4ba81-4ee5-4ad7-918e-adba561af172-tmp\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.517856 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-audit-policies\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.517910 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5df42527-de40-4483-8feb-5e6bb090306c-audit-dir\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.517960 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-service-ca\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518004 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-router-certs\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518147 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d4ba81-4ee5-4ad7-918e-adba561af172-client-ca\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518237 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518270 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-login\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518294 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518316 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518333 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrclw\" (UniqueName: \"kubernetes.io/projected/5df42527-de40-4483-8feb-5e6bb090306c-kube-api-access-xrclw\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518353 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-session\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518389 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d4ba81-4ee5-4ad7-918e-adba561af172-serving-cert\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518492 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt846\" (UniqueName: \"kubernetes.io/projected/f9d4ba81-4ee5-4ad7-918e-adba561af172-kube-api-access-zt846\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518577 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518727 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-error\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " 
pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518774 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ba81-4ee5-4ad7-918e-adba561af172-config\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.518820 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.619907 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-login\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.619987 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620028 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620122 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xrclw\" (UniqueName: \"kubernetes.io/projected/5df42527-de40-4483-8feb-5e6bb090306c-kube-api-access-xrclw\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620167 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c14b0ae-1def-43c8-b735-29562046c411-serving-cert\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620201 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c14b0ae-1def-43c8-b735-29562046c411-tmp\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " 
pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620233 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-session\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620279 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d4ba81-4ee5-4ad7-918e-adba561af172-serving-cert\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620326 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zt846\" (UniqueName: \"kubernetes.io/projected/f9d4ba81-4ee5-4ad7-918e-adba561af172-kube-api-access-zt846\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620360 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620394 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620429 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-client-ca\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620468 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-error\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620501 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ba81-4ee5-4ad7-918e-adba561af172-config\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " 
pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620533 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620580 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9d4ba81-4ee5-4ad7-918e-adba561af172-tmp\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620613 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-audit-policies\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620677 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5df42527-de40-4483-8feb-5e6bb090306c-audit-dir\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620725 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-proxy-ca-bundles\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-service-ca\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620818 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-router-certs\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620867 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-config\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: 
I1208 17:45:24.620897 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d4ba81-4ee5-4ad7-918e-adba561af172-client-ca\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.620950 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5nrz\" (UniqueName: \"kubernetes.io/projected/1c14b0ae-1def-43c8-b735-29562046c411-kube-api-access-v5nrz\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.621001 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.621444 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.622108 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.622317 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d4ba81-4ee5-4ad7-918e-adba561af172-config\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.622608 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5df42527-de40-4483-8feb-5e6bb090306c-audit-dir\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.622788 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-service-ca\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.622872 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9d4ba81-4ee5-4ad7-918e-adba561af172-client-ca\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.623116 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f9d4ba81-4ee5-4ad7-918e-adba561af172-tmp\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.623262 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5df42527-de40-4483-8feb-5e6bb090306c-audit-policies\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.628700 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.628868 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-session\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.629751 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-login\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.630386 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.631360 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-user-template-error\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.633191 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-router-certs\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.633524 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d4ba81-4ee5-4ad7-918e-adba561af172-serving-cert\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.635547 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.635677 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df42527-de40-4483-8feb-5e6bb090306c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.643852 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrclw\" (UniqueName: \"kubernetes.io/projected/5df42527-de40-4483-8feb-5e6bb090306c-kube-api-access-xrclw\") pod \"oauth-openshift-77574cf746-dxh5x\" (UID: \"5df42527-de40-4483-8feb-5e6bb090306c\") " pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.645241 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt846\" (UniqueName: \"kubernetes.io/projected/f9d4ba81-4ee5-4ad7-918e-adba561af172-kube-api-access-zt846\") pod \"route-controller-manager-6ff8c55648-2vpsc\" (UID: \"f9d4ba81-4ee5-4ad7-918e-adba561af172\") " pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.647830 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.684386 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.689717 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c78ecf-976f-4d33-aae1-9d98192f8131" path="/var/lib/kubelet/pods/73c78ecf-976f-4d33-aae1-9d98192f8131/volumes" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.690915 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96a8ac1-0465-4a81-88bf-472026300c81" path="/var/lib/kubelet/pods/c96a8ac1-0465-4a81-88bf-472026300c81/volumes" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.691754 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1038345-8146-4b2e-8678-368bd2c98c99" path="/var/lib/kubelet/pods/f1038345-8146-4b2e-8678-368bd2c98c99/volumes" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.722600 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v5nrz\" (UniqueName: \"kubernetes.io/projected/1c14b0ae-1def-43c8-b735-29562046c411-kube-api-access-v5nrz\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.723017 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c14b0ae-1def-43c8-b735-29562046c411-serving-cert\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.723059 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c14b0ae-1def-43c8-b735-29562046c411-tmp\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.723096 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-client-ca\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.723271 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-proxy-ca-bundles\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.723416 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-config\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.723622 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1c14b0ae-1def-43c8-b735-29562046c411-tmp\") pod 
\"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.724712 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-proxy-ca-bundles\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.724748 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-client-ca\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.725243 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c14b0ae-1def-43c8-b735-29562046c411-config\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.728963 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c14b0ae-1def-43c8-b735-29562046c411-serving-cert\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.732031 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.745001 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5nrz\" (UniqueName: \"kubernetes.io/projected/1c14b0ae-1def-43c8-b735-29562046c411-kube-api-access-v5nrz\") pod \"controller-manager-f765bddb7-4tftp\" (UID: \"1c14b0ae-1def-43c8-b735-29562046c411\") " pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:24 crc kubenswrapper[5113]: I1208 17:45:24.789202 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.015453 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.085194 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.086356 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.201978 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77574cf746-dxh5x"] Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.207699 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f765bddb7-4tftp"] Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.216196 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc"] Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.268560 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.354987 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.494619 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77574cf746-dxh5x"] Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.545163 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc"] Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.630358 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f765bddb7-4tftp"] Dec 08 17:45:25 crc kubenswrapper[5113]: W1208 17:45:25.642604 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c14b0ae_1def_43c8_b735_29562046c411.slice/crio-2dc0c960213374d6608b445e162e47f8c8b7e807052630bb1b188dddbab40b61 WatchSource:0}: Error finding container 2dc0c960213374d6608b445e162e47f8c8b7e807052630bb1b188dddbab40b61: Status 404 returned error can't find the container with id 2dc0c960213374d6608b445e162e47f8c8b7e807052630bb1b188dddbab40b61 Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.656583 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.776198 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" event={"ID":"f9d4ba81-4ee5-4ad7-918e-adba561af172","Type":"ContainerStarted","Data":"a6cd83cf84b597ba3713945b6c37a1dde14d93656adb26a122d943ca690a37b0"} Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.777987 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" event={"ID":"5df42527-de40-4483-8feb-5e6bb090306c","Type":"ContainerStarted","Data":"07064c1d378c1a1af81542624fecac4d31530ede7997a000e4688d55db86a76f"} Dec 08 17:45:25 crc kubenswrapper[5113]: I1208 17:45:25.779696 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" event={"ID":"1c14b0ae-1def-43c8-b735-29562046c411","Type":"ContainerStarted","Data":"2dc0c960213374d6608b445e162e47f8c8b7e807052630bb1b188dddbab40b61"} Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.267951 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.445386 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.786792 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" event={"ID":"5df42527-de40-4483-8feb-5e6bb090306c","Type":"ContainerStarted","Data":"e7654a8cd8e799c7460f2a0d08a4e53e3667d89d3a61f84c35752001d120b21c"} Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.787189 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.788802 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" event={"ID":"1c14b0ae-1def-43c8-b735-29562046c411","Type":"ContainerStarted","Data":"09161bc8b782ab2215263bb1f51142445bf70f9dc4de16fc945d6ab7cf2a8e88"} Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.789081 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.790112 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" event={"ID":"f9d4ba81-4ee5-4ad7-918e-adba561af172","Type":"ContainerStarted","Data":"9477241dd417b02e57eea348269d79924faa0c16d3a8ecc37a2b534b5b77b14e"} Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.790818 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.794855 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.797868 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.798419 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.812927 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77574cf746-dxh5x" podStartSLOduration=80.812904769 podStartE2EDuration="1m20.812904769s" podCreationTimestamp="2025-12-08 17:44:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:26.809775229 +0000 UTC m=+292.525568345" watchObservedRunningTime="2025-12-08 17:45:26.812904769 +0000 UTC m=+292.528697895" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.826365 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ff8c55648-2vpsc" podStartSLOduration=51.826349431 podStartE2EDuration="51.826349431s" podCreationTimestamp="2025-12-08 17:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:26.824606157 +0000 UTC m=+292.540399293" watchObservedRunningTime="2025-12-08 17:45:26.826349431 +0000 UTC m=+292.542142547" Dec 08 17:45:26 crc kubenswrapper[5113]: I1208 17:45:26.862155 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f765bddb7-4tftp" podStartSLOduration=51.862134762 podStartE2EDuration="51.862134762s" podCreationTimestamp="2025-12-08 17:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:45:26.859251169 +0000 UTC m=+292.575044285" watchObservedRunningTime="2025-12-08 17:45:26.862134762 +0000 UTC m=+292.577927878" Dec 08 17:45:27 crc kubenswrapper[5113]: I1208 17:45:27.002949 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 17:45:27 crc kubenswrapper[5113]: I1208 17:45:27.006291 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 17:45:27 crc kubenswrapper[5113]: I1208 17:45:27.646986 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 17:45:33 crc kubenswrapper[5113]: I1208 17:45:33.839932 5113 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 17:45:33 crc kubenswrapper[5113]: I1208 17:45:33.840765 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4" gracePeriod=5 Dec 08 17:45:34 crc kubenswrapper[5113]: I1208 17:45:34.692446 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 17:45:34 crc kubenswrapper[5113]: I1208 17:45:34.839514 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:45:34 crc kubenswrapper[5113]: I1208 17:45:34.841359 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.409452 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.409991 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.411927 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.430810 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.431544 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532577 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532675 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532739 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532771 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532814 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532838 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.532875 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.533258 5113 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.533284 5113 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.533299 5113 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.533318 5113 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.546118 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.634303 5113 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.872488 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.872549 5113 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4" exitCode=137 Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.872671 5113 scope.go:117] "RemoveContainer" containerID="c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.872681 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.891312 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.902142 5113 scope.go:117] "RemoveContainer" containerID="c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4" Dec 08 17:45:39 crc kubenswrapper[5113]: E1208 17:45:39.902541 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4\": container with ID starting with c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4 not found: ID does not exist" containerID="c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4" Dec 08 17:45:39 crc kubenswrapper[5113]: I1208 17:45:39.902579 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4"} err="failed to get container status \"c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4\": rpc error: code = NotFound desc = could not find container \"c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4\": container with ID starting with c402dd22df3e8adfb8264097621d79aa7cab3cd3e960b20b67ad78523f2bf2e4 not found: ID does not exist" Dec 08 17:45:40 crc kubenswrapper[5113]: I1208 17:45:40.687774 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 17:45:53 crc kubenswrapper[5113]: I1208 17:45:53.659449 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:45:53 crc kubenswrapper[5113]: I1208 17:45:53.966027 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:45:53 crc kubenswrapper[5113]: I1208 17:45:53.968797 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 17:45:53 crc kubenswrapper[5113]: I1208 17:45:53.968931 5113 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="dc8014cd8de21c6ff60f91f04ba6218a7c7a5b7a18e523f7159d9e12e5d8edeb" exitCode=137 Dec 08 17:45:53 crc kubenswrapper[5113]: I1208 17:45:53.969215 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"dc8014cd8de21c6ff60f91f04ba6218a7c7a5b7a18e523f7159d9e12e5d8edeb"} Dec 08 17:45:53 crc kubenswrapper[5113]: I1208 17:45:53.969265 5113 scope.go:117] "RemoveContainer" containerID="a2ce1580ae56a77f8481e370d6da0bd0c53bd71f5a92681837b78707d12f84be" Dec 08 17:45:54 crc kubenswrapper[5113]: I1208 17:45:54.979933 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:45:54 crc kubenswrapper[5113]: I1208 17:45:54.981619 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9baa7dafa9949ba6ccc025c16d6f4bd490c51f595160bd7eb47b0e2e21a55ca8"} Dec 08 17:46:03 crc kubenswrapper[5113]: I1208 17:46:03.541304 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:03 crc kubenswrapper[5113]: I1208 17:46:03.547601 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:04 crc kubenswrapper[5113]: I1208 17:46:04.034566 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:04 crc kubenswrapper[5113]: I1208 17:46:04.041960 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.075574 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg"] Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.076899 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.076914 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.077028 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.127543 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg"] Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.127713 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.130598 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.131135 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.300972 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7dn\" (UniqueName: \"kubernetes.io/projected/12e30287-ae59-4698-a143-0d30748c992c-kube-api-access-zc7dn\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.301096 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12e30287-ae59-4698-a143-0d30748c992c-secret-volume\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.301143 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12e30287-ae59-4698-a143-0d30748c992c-config-volume\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.403067 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7dn\" (UniqueName: \"kubernetes.io/projected/12e30287-ae59-4698-a143-0d30748c992c-kube-api-access-zc7dn\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.403168 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12e30287-ae59-4698-a143-0d30748c992c-secret-volume\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.403368 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12e30287-ae59-4698-a143-0d30748c992c-config-volume\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.405199 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/12e30287-ae59-4698-a143-0d30748c992c-config-volume\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.412843 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12e30287-ae59-4698-a143-0d30748c992c-secret-volume\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.422296 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7dn\" (UniqueName: \"kubernetes.io/projected/12e30287-ae59-4698-a143-0d30748c992c-kube-api-access-zc7dn\") pod \"collect-profiles-29420265-429hg\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:14 crc kubenswrapper[5113]: I1208 17:46:14.444374 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:15 crc kubenswrapper[5113]: I1208 17:46:15.013698 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg"] Dec 08 17:46:15 crc kubenswrapper[5113]: I1208 17:46:15.110519 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" event={"ID":"12e30287-ae59-4698-a143-0d30748c992c","Type":"ContainerStarted","Data":"54b52ce28ba889e9ffe9068ed27f850e586ccdd9a799a7947e559f8c0daaf63c"} Dec 08 17:46:16 crc kubenswrapper[5113]: I1208 17:46:16.119112 5113 generic.go:358] "Generic (PLEG): container finished" podID="12e30287-ae59-4698-a143-0d30748c992c" containerID="4806f2ebef31daf06c2b47bce69d76e366f01c7d8005ab5ecd17b741f057f241" exitCode=0 Dec 08 17:46:16 crc kubenswrapper[5113]: I1208 17:46:16.119308 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" event={"ID":"12e30287-ae59-4698-a143-0d30748c992c","Type":"ContainerDied","Data":"4806f2ebef31daf06c2b47bce69d76e366f01c7d8005ab5ecd17b741f057f241"} Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.375803 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.452900 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12e30287-ae59-4698-a143-0d30748c992c-config-volume\") pod \"12e30287-ae59-4698-a143-0d30748c992c\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.452962 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc7dn\" (UniqueName: \"kubernetes.io/projected/12e30287-ae59-4698-a143-0d30748c992c-kube-api-access-zc7dn\") pod \"12e30287-ae59-4698-a143-0d30748c992c\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.453264 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12e30287-ae59-4698-a143-0d30748c992c-secret-volume\") pod \"12e30287-ae59-4698-a143-0d30748c992c\" (UID: \"12e30287-ae59-4698-a143-0d30748c992c\") " Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.453929 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e30287-ae59-4698-a143-0d30748c992c-config-volume" (OuterVolumeSpecName: "config-volume") pod "12e30287-ae59-4698-a143-0d30748c992c" (UID: "12e30287-ae59-4698-a143-0d30748c992c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.467258 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e30287-ae59-4698-a143-0d30748c992c-kube-api-access-zc7dn" (OuterVolumeSpecName: "kube-api-access-zc7dn") pod "12e30287-ae59-4698-a143-0d30748c992c" (UID: "12e30287-ae59-4698-a143-0d30748c992c"). InnerVolumeSpecName "kube-api-access-zc7dn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.468712 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e30287-ae59-4698-a143-0d30748c992c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "12e30287-ae59-4698-a143-0d30748c992c" (UID: "12e30287-ae59-4698-a143-0d30748c992c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.555519 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/12e30287-ae59-4698-a143-0d30748c992c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.555603 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12e30287-ae59-4698-a143-0d30748c992c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:17 crc kubenswrapper[5113]: I1208 17:46:17.555617 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc7dn\" (UniqueName: \"kubernetes.io/projected/12e30287-ae59-4698-a143-0d30748c992c-kube-api-access-zc7dn\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:18 crc kubenswrapper[5113]: I1208 17:46:18.135875 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" Dec 08 17:46:18 crc kubenswrapper[5113]: I1208 17:46:18.135884 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420265-429hg" event={"ID":"12e30287-ae59-4698-a143-0d30748c992c","Type":"ContainerDied","Data":"54b52ce28ba889e9ffe9068ed27f850e586ccdd9a799a7947e559f8c0daaf63c"} Dec 08 17:46:18 crc kubenswrapper[5113]: I1208 17:46:18.135969 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b52ce28ba889e9ffe9068ed27f850e586ccdd9a799a7947e559f8c0daaf63c" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.304240 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x2ww"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.305122 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6x2ww" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="registry-server" containerID="cri-o://88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238" gracePeriod=30 Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.320186 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5vsp"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.320707 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5vsp" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="registry-server" containerID="cri-o://7fd40df2a3318b992a023fff47fb9d008eb556d42ed9e85d1acf3638770ef810" gracePeriod=30 Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.333488 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.334058 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" containerID="cri-o://67718f4ac95e1f2c5e512760e576b68511d5bb35af99bde73af38bda7fafb824" gracePeriod=30 Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.344194 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m6gs"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.344600 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m6gs" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="registry-server" containerID="cri-o://8e16d37bfdeb160b414d02568aa162e87665e3ad3bba0d1778d82066e2f9d56c" gracePeriod=30 Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.351559 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2k64"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.353481 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2k64" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="registry-server" containerID="cri-o://0e2d24c1911873a864450a40e7b78f6b036dd1403cfbc6f15956aa927d915bd2" gracePeriod=30 Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.379498 5113 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-547dbd544d-2g9kv"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.380155 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="12e30287-ae59-4698-a143-0d30748c992c" containerName="collect-profiles" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.380176 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e30287-ae59-4698-a143-0d30748c992c" containerName="collect-profiles" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.380306 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="12e30287-ae59-4698-a143-0d30748c992c" containerName="collect-profiles" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.497575 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2g9kv"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.497779 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.560427 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afbdd630-9dd6-4ead-a807-ab9287508809-tmp\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.560493 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afbdd630-9dd6-4ead-a807-ab9287508809-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: E1208 17:46:36.560628 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238 is running failed: container process not found" containerID="88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.560924 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afbdd630-9dd6-4ead-a807-ab9287508809-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.560969 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf8ts\" (UniqueName: \"kubernetes.io/projected/afbdd630-9dd6-4ead-a807-ab9287508809-kube-api-access-rf8ts\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: E1208 17:46:36.561020 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238 is running failed: container process not found" containerID="88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:46:36 crc kubenswrapper[5113]: E1208 17:46:36.561624 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238 is running failed: container process not found" containerID="88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:46:36 crc kubenswrapper[5113]: E1208 17:46:36.561681 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-6x2ww" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="registry-server" probeResult="unknown" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.662943 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afbdd630-9dd6-4ead-a807-ab9287508809-tmp\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.663023 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afbdd630-9dd6-4ead-a807-ab9287508809-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.663068 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afbdd630-9dd6-4ead-a807-ab9287508809-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.663111 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rf8ts\" (UniqueName: \"kubernetes.io/projected/afbdd630-9dd6-4ead-a807-ab9287508809-kube-api-access-rf8ts\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.663798 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/afbdd630-9dd6-4ead-a807-ab9287508809-tmp\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.664625 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/afbdd630-9dd6-4ead-a807-ab9287508809-marketplace-trusted-ca\") pod 
\"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.671170 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/afbdd630-9dd6-4ead-a807-ab9287508809-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.681633 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf8ts\" (UniqueName: \"kubernetes.io/projected/afbdd630-9dd6-4ead-a807-ab9287508809-kube-api-access-rf8ts\") pod \"marketplace-operator-547dbd544d-2g9kv\" (UID: \"afbdd630-9dd6-4ead-a807-ab9287508809\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:36 crc kubenswrapper[5113]: I1208 17:46:36.903179 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.277445 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerDied","Data":"0e2d24c1911873a864450a40e7b78f6b036dd1403cfbc6f15956aa927d915bd2"} Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.277420 5113 generic.go:358] "Generic (PLEG): container finished" podID="8be217c9-d60b-4e20-9733-d8011aa40811" containerID="0e2d24c1911873a864450a40e7b78f6b036dd1403cfbc6f15956aa927d915bd2" exitCode=0 Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.286671 5113 generic.go:358] "Generic (PLEG): container finished" podID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerID="8e16d37bfdeb160b414d02568aa162e87665e3ad3bba0d1778d82066e2f9d56c" exitCode=0 Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.286795 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m6gs" event={"ID":"6c70de64-72e0-4f9a-a819-2c1a683e43b7","Type":"ContainerDied","Data":"8e16d37bfdeb160b414d02568aa162e87665e3ad3bba0d1778d82066e2f9d56c"} Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.312244 5113 generic.go:358] "Generic (PLEG): container finished" podID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerID="7fd40df2a3318b992a023fff47fb9d008eb556d42ed9e85d1acf3638770ef810" exitCode=0 Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.312335 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerDied","Data":"7fd40df2a3318b992a023fff47fb9d008eb556d42ed9e85d1acf3638770ef810"} Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.316438 5113 generic.go:358] "Generic (PLEG): container finished" podID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerID="67718f4ac95e1f2c5e512760e576b68511d5bb35af99bde73af38bda7fafb824" exitCode=0 Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.316536 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" event={"ID":"c46cf580-9081-4eac-aee1-1dcd5d7df322","Type":"ContainerDied","Data":"67718f4ac95e1f2c5e512760e576b68511d5bb35af99bde73af38bda7fafb824"} Dec 08 
17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.329716 5113 generic.go:358] "Generic (PLEG): container finished" podID="f838eabb-c868-4308-ab80-860767b7bf4a" containerID="88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238" exitCode=0 Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.329789 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x2ww" event={"ID":"f838eabb-c868-4308-ab80-860767b7bf4a","Type":"ContainerDied","Data":"88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238"} Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.379682 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.476198 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-catalog-content\") pod \"f838eabb-c868-4308-ab80-860767b7bf4a\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.476333 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzcn2\" (UniqueName: \"kubernetes.io/projected/f838eabb-c868-4308-ab80-860767b7bf4a-kube-api-access-qzcn2\") pod \"f838eabb-c868-4308-ab80-860767b7bf4a\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.476427 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-utilities\") pod \"f838eabb-c868-4308-ab80-860767b7bf4a\" (UID: \"f838eabb-c868-4308-ab80-860767b7bf4a\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.477922 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-utilities" (OuterVolumeSpecName: "utilities") pod "f838eabb-c868-4308-ab80-860767b7bf4a" (UID: "f838eabb-c868-4308-ab80-860767b7bf4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.481486 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.487927 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f838eabb-c868-4308-ab80-860767b7bf4a-kube-api-access-qzcn2" (OuterVolumeSpecName: "kube-api-access-qzcn2") pod "f838eabb-c868-4308-ab80-860767b7bf4a" (UID: "f838eabb-c868-4308-ab80-860767b7bf4a"). InnerVolumeSpecName "kube-api-access-qzcn2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.529194 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f838eabb-c868-4308-ab80-860767b7bf4a" (UID: "f838eabb-c868-4308-ab80-860767b7bf4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.578749 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-catalog-content\") pod \"d6ee077b-7234-40ba-87fc-f305ca2738e3\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.579043 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqz88\" (UniqueName: \"kubernetes.io/projected/d6ee077b-7234-40ba-87fc-f305ca2738e3-kube-api-access-rqz88\") pod \"d6ee077b-7234-40ba-87fc-f305ca2738e3\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.579243 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-utilities\") pod \"d6ee077b-7234-40ba-87fc-f305ca2738e3\" (UID: \"d6ee077b-7234-40ba-87fc-f305ca2738e3\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.581470 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.581507 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qzcn2\" (UniqueName: \"kubernetes.io/projected/f838eabb-c868-4308-ab80-860767b7bf4a-kube-api-access-qzcn2\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.581521 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f838eabb-c868-4308-ab80-860767b7bf4a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.583069 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-utilities" (OuterVolumeSpecName: "utilities") pod "d6ee077b-7234-40ba-87fc-f305ca2738e3" (UID: "d6ee077b-7234-40ba-87fc-f305ca2738e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.594569 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ee077b-7234-40ba-87fc-f305ca2738e3-kube-api-access-rqz88" (OuterVolumeSpecName: "kube-api-access-rqz88") pod "d6ee077b-7234-40ba-87fc-f305ca2738e3" (UID: "d6ee077b-7234-40ba-87fc-f305ca2738e3"). InnerVolumeSpecName "kube-api-access-rqz88". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.595650 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m6gs" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.598683 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2k64" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.599475 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-2g9kv"] Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.604497 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.649386 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6ee077b-7234-40ba-87fc-f305ca2738e3" (UID: "d6ee077b-7234-40ba-87fc-f305ca2738e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.682859 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzb5c\" (UniqueName: \"kubernetes.io/projected/c46cf580-9081-4eac-aee1-1dcd5d7df322-kube-api-access-fzb5c\") pod \"c46cf580-9081-4eac-aee1-1dcd5d7df322\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.682947 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-catalog-content\") pod \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683026 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98p8v\" (UniqueName: \"kubernetes.io/projected/8be217c9-d60b-4e20-9733-d8011aa40811-kube-api-access-98p8v\") pod \"8be217c9-d60b-4e20-9733-d8011aa40811\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683163 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-trusted-ca\") pod \"c46cf580-9081-4eac-aee1-1dcd5d7df322\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683188 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c46cf580-9081-4eac-aee1-1dcd5d7df322-tmp\") pod \"c46cf580-9081-4eac-aee1-1dcd5d7df322\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683205 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-utilities\") pod \"8be217c9-d60b-4e20-9733-d8011aa40811\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683233 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghpvl\" (UniqueName: \"kubernetes.io/projected/6c70de64-72e0-4f9a-a819-2c1a683e43b7-kube-api-access-ghpvl\") pod \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683282 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-operator-metrics\") pod \"c46cf580-9081-4eac-aee1-1dcd5d7df322\" (UID: \"c46cf580-9081-4eac-aee1-1dcd5d7df322\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683350 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-utilities\") pod \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\" (UID: \"6c70de64-72e0-4f9a-a819-2c1a683e43b7\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683438 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-catalog-content\") pod \"8be217c9-d60b-4e20-9733-d8011aa40811\" (UID: \"8be217c9-d60b-4e20-9733-d8011aa40811\") " Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683668 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683685 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6ee077b-7234-40ba-87fc-f305ca2738e3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683697 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rqz88\" (UniqueName: \"kubernetes.io/projected/d6ee077b-7234-40ba-87fc-f305ca2738e3-kube-api-access-rqz88\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.683922 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c46cf580-9081-4eac-aee1-1dcd5d7df322-tmp" (OuterVolumeSpecName: "tmp") pod "c46cf580-9081-4eac-aee1-1dcd5d7df322" (UID: "c46cf580-9081-4eac-aee1-1dcd5d7df322"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.685811 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-utilities" (OuterVolumeSpecName: "utilities") pod "8be217c9-d60b-4e20-9733-d8011aa40811" (UID: "8be217c9-d60b-4e20-9733-d8011aa40811"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.687163 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c46cf580-9081-4eac-aee1-1dcd5d7df322" (UID: "c46cf580-9081-4eac-aee1-1dcd5d7df322"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.688730 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-utilities" (OuterVolumeSpecName: "utilities") pod "6c70de64-72e0-4f9a-a819-2c1a683e43b7" (UID: "6c70de64-72e0-4f9a-a819-2c1a683e43b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.690262 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c70de64-72e0-4f9a-a819-2c1a683e43b7-kube-api-access-ghpvl" (OuterVolumeSpecName: "kube-api-access-ghpvl") pod "6c70de64-72e0-4f9a-a819-2c1a683e43b7" (UID: "6c70de64-72e0-4f9a-a819-2c1a683e43b7"). InnerVolumeSpecName "kube-api-access-ghpvl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.694227 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be217c9-d60b-4e20-9733-d8011aa40811-kube-api-access-98p8v" (OuterVolumeSpecName: "kube-api-access-98p8v") pod "8be217c9-d60b-4e20-9733-d8011aa40811" (UID: "8be217c9-d60b-4e20-9733-d8011aa40811"). InnerVolumeSpecName "kube-api-access-98p8v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.695966 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46cf580-9081-4eac-aee1-1dcd5d7df322-kube-api-access-fzb5c" (OuterVolumeSpecName: "kube-api-access-fzb5c") pod "c46cf580-9081-4eac-aee1-1dcd5d7df322" (UID: "c46cf580-9081-4eac-aee1-1dcd5d7df322"). InnerVolumeSpecName "kube-api-access-fzb5c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.697073 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c70de64-72e0-4f9a-a819-2c1a683e43b7" (UID: "6c70de64-72e0-4f9a-a819-2c1a683e43b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.708242 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c46cf580-9081-4eac-aee1-1dcd5d7df322" (UID: "c46cf580-9081-4eac-aee1-1dcd5d7df322"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.783715 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8be217c9-d60b-4e20-9733-d8011aa40811" (UID: "8be217c9-d60b-4e20-9733-d8011aa40811"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784689 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784752 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzb5c\" (UniqueName: \"kubernetes.io/projected/c46cf580-9081-4eac-aee1-1dcd5d7df322-kube-api-access-fzb5c\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784773 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784784 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98p8v\" (UniqueName: \"kubernetes.io/projected/8be217c9-d60b-4e20-9733-d8011aa40811-kube-api-access-98p8v\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784796 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784808 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c46cf580-9081-4eac-aee1-1dcd5d7df322-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784819 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8be217c9-d60b-4e20-9733-d8011aa40811-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784832 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ghpvl\" (UniqueName: \"kubernetes.io/projected/6c70de64-72e0-4f9a-a819-2c1a683e43b7-kube-api-access-ghpvl\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784844 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c46cf580-9081-4eac-aee1-1dcd5d7df322-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:37 crc kubenswrapper[5113]: I1208 17:46:37.784859 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c70de64-72e0-4f9a-a819-2c1a683e43b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.339494 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6x2ww" event={"ID":"f838eabb-c868-4308-ab80-860767b7bf4a","Type":"ContainerDied","Data":"c0feb019fd14972bde712fe4c9a9ab01ee131fc6854d298bc2ed23085900eda2"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.339973 5113 scope.go:117] "RemoveContainer" containerID="88e8a5fdd6f6501a73e1e50748725f686842de5e054fd21a1485b27ab93a1238" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.339887 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6x2ww" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.343823 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" event={"ID":"afbdd630-9dd6-4ead-a807-ab9287508809","Type":"ContainerStarted","Data":"b7d6a0f719befa402f71a866d77d6da6a37609140dde2c4208c2f86306c32b53"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.343872 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" event={"ID":"afbdd630-9dd6-4ead-a807-ab9287508809","Type":"ContainerStarted","Data":"6df75cb6ba54aec85457ea446eb359c9fce5daedd1cbddf0d1da91da2f4223bb"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.344085 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.350574 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.351243 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2k64" event={"ID":"8be217c9-d60b-4e20-9733-d8011aa40811","Type":"ContainerDied","Data":"eee7af9152938f73873e1bd54deeccbf3b5959856e9d7935cf1f2030dd0704ac"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.351449 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2k64" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.354500 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m6gs" event={"ID":"6c70de64-72e0-4f9a-a819-2c1a683e43b7","Type":"ContainerDied","Data":"3a88be7d3ae35bbb7880f2ff9b9ac16d649c7cadebf1dd7ed40a3dac9957936b"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.354544 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m6gs" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.357209 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5vsp" event={"ID":"d6ee077b-7234-40ba-87fc-f305ca2738e3","Type":"ContainerDied","Data":"bbe101261d8520d0990091bdefb34ea5aef1aedba757eea838e0d70cedf2f99c"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.357331 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q5vsp" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.359294 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" event={"ID":"c46cf580-9081-4eac-aee1-1dcd5d7df322","Type":"ContainerDied","Data":"9c2b5b36c0d3edbce4a549d9263fd63fe6c1ac262987cba37a007628205cf471"} Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.359401 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bhw9j" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.365367 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-2g9kv" podStartSLOduration=2.365350458 podStartE2EDuration="2.365350458s" podCreationTimestamp="2025-12-08 17:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:46:38.363268682 +0000 UTC m=+364.079061808" watchObservedRunningTime="2025-12-08 17:46:38.365350458 +0000 UTC m=+364.081143574" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.381821 5113 scope.go:117] "RemoveContainer" containerID="a75330171946fb71accab334bfcdcbdf4fc67fb7c893601cb5b739a8a7ec8d06" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.428642 5113 scope.go:117] "RemoveContainer" containerID="df9f9ae6bc2abba28564a759f0d7a48b417f41024ea9baf7fd255e613cec20c6" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.442724 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2k64"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.457688 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2k64"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.466198 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m6gs"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.473244 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m6gs"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.475662 5113 scope.go:117] "RemoveContainer" containerID="0e2d24c1911873a864450a40e7b78f6b036dd1403cfbc6f15956aa927d915bd2" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.478730 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6x2ww"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.490164 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6x2ww"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.493290 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.496379 5113 scope.go:117] "RemoveContainer" containerID="a0035579160b1c007b9a20537bf681e17ec26c3c1dee2793168456012776ae75" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.497318 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bhw9j"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.500362 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5vsp"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.509796 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5vsp"] Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.512162 5113 scope.go:117] "RemoveContainer" containerID="aabbbdd34b56782314e6c887b022f558a27e610e3e24fd6a7f86173456df397d" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.532426 5113 scope.go:117] "RemoveContainer" containerID="8e16d37bfdeb160b414d02568aa162e87665e3ad3bba0d1778d82066e2f9d56c" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 
17:46:38.547915 5113 scope.go:117] "RemoveContainer" containerID="25c1df993569d361b2d5dcbf6d05f876a0f69d30301c681df260e4b27c1dbfec" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.562478 5113 scope.go:117] "RemoveContainer" containerID="f33cf794bf762044c6db6b82798f2f9a322d27bb049266ba60dc4727ba1d4577" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.581289 5113 scope.go:117] "RemoveContainer" containerID="7fd40df2a3318b992a023fff47fb9d008eb556d42ed9e85d1acf3638770ef810" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.602107 5113 scope.go:117] "RemoveContainer" containerID="6b54997d97a14623a25dc5ef9cf8e654f99329e7742b4081fd1627ea8feba5f9" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.618778 5113 scope.go:117] "RemoveContainer" containerID="7a50db9482b9a63c9f7859a61a3c7ea19b17669bd354c6b9bac01f6568dad44d" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.635553 5113 scope.go:117] "RemoveContainer" containerID="67718f4ac95e1f2c5e512760e576b68511d5bb35af99bde73af38bda7fafb824" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.688424 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" path="/var/lib/kubelet/pods/6c70de64-72e0-4f9a-a819-2c1a683e43b7/volumes" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.689319 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" path="/var/lib/kubelet/pods/8be217c9-d60b-4e20-9733-d8011aa40811/volumes" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.690256 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" path="/var/lib/kubelet/pods/c46cf580-9081-4eac-aee1-1dcd5d7df322/volumes" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.691434 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" path="/var/lib/kubelet/pods/d6ee077b-7234-40ba-87fc-f305ca2738e3/volumes" Dec 08 17:46:38 crc kubenswrapper[5113]: I1208 17:46:38.692152 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" path="/var/lib/kubelet/pods/f838eabb-c868-4308-ab80-860767b7bf4a/volumes" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.139787 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7dpp"] Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.140963 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.140979 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.140987 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.140993 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141007 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141014 
5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141023 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141029 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141054 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141077 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141088 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141094 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141102 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141107 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141117 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141123 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141132 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141137 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141143 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141148 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="extract-content" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141158 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141163 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141173 5113 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141181 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="extract-utilities" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141189 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141195 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141296 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f838eabb-c868-4308-ab80-860767b7bf4a" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141305 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c70de64-72e0-4f9a-a819-2c1a683e43b7" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141317 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d6ee077b-7234-40ba-87fc-f305ca2738e3" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141383 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c46cf580-9081-4eac-aee1-1dcd5d7df322" containerName="marketplace-operator" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.141395 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="8be217c9-d60b-4e20-9733-d8011aa40811" containerName="registry-server" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.146806 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.148927 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.154095 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7dpp"] Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.233824 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2223f883-f124-4e7d-a809-e4ecb7a340aa-catalog-content\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.233909 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpgr\" (UniqueName: \"kubernetes.io/projected/2223f883-f124-4e7d-a809-e4ecb7a340aa-kube-api-access-xtpgr\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.234240 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2223f883-f124-4e7d-a809-e4ecb7a340aa-utilities\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.336478 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2223f883-f124-4e7d-a809-e4ecb7a340aa-catalog-content\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.336557 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xtpgr\" (UniqueName: \"kubernetes.io/projected/2223f883-f124-4e7d-a809-e4ecb7a340aa-kube-api-access-xtpgr\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.336605 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2223f883-f124-4e7d-a809-e4ecb7a340aa-utilities\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.337286 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2223f883-f124-4e7d-a809-e4ecb7a340aa-catalog-content\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.337345 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2223f883-f124-4e7d-a809-e4ecb7a340aa-utilities\") pod 
\"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.392162 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtpgr\" (UniqueName: \"kubernetes.io/projected/2223f883-f124-4e7d-a809-e4ecb7a340aa-kube-api-access-xtpgr\") pod \"certified-operators-t7dpp\" (UID: \"2223f883-f124-4e7d-a809-e4ecb7a340aa\") " pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.482644 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:40 crc kubenswrapper[5113]: I1208 17:46:40.703641 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7dpp"] Dec 08 17:46:40 crc kubenswrapper[5113]: W1208 17:46:40.724071 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2223f883_f124_4e7d_a809_e4ecb7a340aa.slice/crio-7dab3a5cea16c93481eb12ca100f8f4418f5e5753c82332e180c830d36b97eff WatchSource:0}: Error finding container 7dab3a5cea16c93481eb12ca100f8f4418f5e5753c82332e180c830d36b97eff: Status 404 returned error can't find the container with id 7dab3a5cea16c93481eb12ca100f8f4418f5e5753c82332e180c830d36b97eff Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.142097 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gvdg5"] Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.147256 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.165203 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.176461 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvdg5"] Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.248965 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95bn6\" (UniqueName: \"kubernetes.io/projected/0dc56993-df8e-430a-86c7-8942114fd9f8-kube-api-access-95bn6\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.249885 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc56993-df8e-430a-86c7-8942114fd9f8-utilities\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.250094 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc56993-df8e-430a-86c7-8942114fd9f8-catalog-content\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.351700 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-95bn6\" (UniqueName: \"kubernetes.io/projected/0dc56993-df8e-430a-86c7-8942114fd9f8-kube-api-access-95bn6\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.351786 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc56993-df8e-430a-86c7-8942114fd9f8-utilities\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.351845 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc56993-df8e-430a-86c7-8942114fd9f8-catalog-content\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.352506 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc56993-df8e-430a-86c7-8942114fd9f8-catalog-content\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.353246 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc56993-df8e-430a-86c7-8942114fd9f8-utilities\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.375380 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-95bn6\" (UniqueName: \"kubernetes.io/projected/0dc56993-df8e-430a-86c7-8942114fd9f8-kube-api-access-95bn6\") pod \"community-operators-gvdg5\" (UID: \"0dc56993-df8e-430a-86c7-8942114fd9f8\") " pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.404888 5113 generic.go:358] "Generic (PLEG): container finished" podID="2223f883-f124-4e7d-a809-e4ecb7a340aa" containerID="226208fe6f4da9be6d6eeaed445543f707bc3251f3777f0055298d2db4810bf8" exitCode=0 Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.405240 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7dpp" event={"ID":"2223f883-f124-4e7d-a809-e4ecb7a340aa","Type":"ContainerDied","Data":"226208fe6f4da9be6d6eeaed445543f707bc3251f3777f0055298d2db4810bf8"} Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.405277 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7dpp" event={"ID":"2223f883-f124-4e7d-a809-e4ecb7a340aa","Type":"ContainerStarted","Data":"7dab3a5cea16c93481eb12ca100f8f4418f5e5753c82332e180c830d36b97eff"} Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.465269 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:41 crc kubenswrapper[5113]: I1208 17:46:41.673740 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gvdg5"] Dec 08 17:46:41 crc kubenswrapper[5113]: W1208 17:46:41.677474 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dc56993_df8e_430a_86c7_8942114fd9f8.slice/crio-cf661e0a1db5f08bee8508ca7e55b79e41300c4304b029fee7d9bedfe2620632 WatchSource:0}: Error finding container cf661e0a1db5f08bee8508ca7e55b79e41300c4304b029fee7d9bedfe2620632: Status 404 returned error can't find the container with id cf661e0a1db5f08bee8508ca7e55b79e41300c4304b029fee7d9bedfe2620632 Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.418775 5113 generic.go:358] "Generic (PLEG): container finished" podID="0dc56993-df8e-430a-86c7-8942114fd9f8" containerID="29e304029a8986cb2ddeb73b55d0eca0a1d331259a64d9916cf8f282c75a3176" exitCode=0 Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.418949 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvdg5" event={"ID":"0dc56993-df8e-430a-86c7-8942114fd9f8","Type":"ContainerDied","Data":"29e304029a8986cb2ddeb73b55d0eca0a1d331259a64d9916cf8f282c75a3176"} Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.419507 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvdg5" event={"ID":"0dc56993-df8e-430a-86c7-8942114fd9f8","Type":"ContainerStarted","Data":"cf661e0a1db5f08bee8508ca7e55b79e41300c4304b029fee7d9bedfe2620632"} Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.541989 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqqpz"] Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.549538 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.550343 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqqpz"] Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.554935 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.674158 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-utilities\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.674241 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phmq9\" (UniqueName: \"kubernetes.io/projected/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-kube-api-access-phmq9\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.674512 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-catalog-content\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.776285 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-catalog-content\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.777109 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-utilities\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.777156 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phmq9\" (UniqueName: \"kubernetes.io/projected/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-kube-api-access-phmq9\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.777177 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-utilities\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.777340 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-catalog-content\") pod \"redhat-marketplace-hqqpz\" (UID: 
\"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.799196 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phmq9\" (UniqueName: \"kubernetes.io/projected/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-kube-api-access-phmq9\") pod \"redhat-marketplace-hqqpz\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:42 crc kubenswrapper[5113]: I1208 17:46:42.876299 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.100188 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqqpz"] Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.429093 5113 generic.go:358] "Generic (PLEG): container finished" podID="0dc56993-df8e-430a-86c7-8942114fd9f8" containerID="66400f471dba6d132b299bd06a63a7dde157f5d19fba238109b26c67cbc6b308" exitCode=0 Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.429239 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvdg5" event={"ID":"0dc56993-df8e-430a-86c7-8942114fd9f8","Type":"ContainerDied","Data":"66400f471dba6d132b299bd06a63a7dde157f5d19fba238109b26c67cbc6b308"} Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.447137 5113 generic.go:358] "Generic (PLEG): container finished" podID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerID="0f8b6cb4beb47ae8d32f0e5188307554ba28bea440b1f9471e280e7693117dca" exitCode=0 Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.447229 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqqpz" event={"ID":"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef","Type":"ContainerDied","Data":"0f8b6cb4beb47ae8d32f0e5188307554ba28bea440b1f9471e280e7693117dca"} Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.447353 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqqpz" event={"ID":"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef","Type":"ContainerStarted","Data":"13c7f2da6f4e58e9090c055eae2029749017dc6c2f3789f0360a6972404ff3ae"} Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.936965 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkh7h"] Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.947436 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.950899 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 17:46:43 crc kubenswrapper[5113]: I1208 17:46:43.954372 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkh7h"] Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.098051 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7r7q\" (UniqueName: \"kubernetes.io/projected/998917ca-cb0e-434f-88cc-1a9d0d26a429-kube-api-access-t7r7q\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.098124 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998917ca-cb0e-434f-88cc-1a9d0d26a429-catalog-content\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.098188 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998917ca-cb0e-434f-88cc-1a9d0d26a429-utilities\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.199239 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7r7q\" (UniqueName: \"kubernetes.io/projected/998917ca-cb0e-434f-88cc-1a9d0d26a429-kube-api-access-t7r7q\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.199314 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998917ca-cb0e-434f-88cc-1a9d0d26a429-catalog-content\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.199372 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998917ca-cb0e-434f-88cc-1a9d0d26a429-utilities\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.199982 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998917ca-cb0e-434f-88cc-1a9d0d26a429-utilities\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.201025 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998917ca-cb0e-434f-88cc-1a9d0d26a429-catalog-content\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") 
" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.237636 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7r7q\" (UniqueName: \"kubernetes.io/projected/998917ca-cb0e-434f-88cc-1a9d0d26a429-kube-api-access-t7r7q\") pod \"redhat-operators-wkh7h\" (UID: \"998917ca-cb0e-434f-88cc-1a9d0d26a429\") " pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.271741 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:44 crc kubenswrapper[5113]: I1208 17:46:44.655560 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkh7h"] Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.472747 5113 generic.go:358] "Generic (PLEG): container finished" podID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerID="d95266da31d294a8e5f9e8f0e8e80853a69e04ea4c5ccd29b03bfbd4071801a2" exitCode=0 Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.472922 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqqpz" event={"ID":"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef","Type":"ContainerDied","Data":"d95266da31d294a8e5f9e8f0e8e80853a69e04ea4c5ccd29b03bfbd4071801a2"} Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.485463 5113 generic.go:358] "Generic (PLEG): container finished" podID="2223f883-f124-4e7d-a809-e4ecb7a340aa" containerID="e04f661b3821e5bd6758cb371534cf2b02da5ebbeb6036062c4dd2b2dd4fa921" exitCode=0 Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.486605 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7dpp" event={"ID":"2223f883-f124-4e7d-a809-e4ecb7a340aa","Type":"ContainerDied","Data":"e04f661b3821e5bd6758cb371534cf2b02da5ebbeb6036062c4dd2b2dd4fa921"} Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.495443 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gvdg5" event={"ID":"0dc56993-df8e-430a-86c7-8942114fd9f8","Type":"ContainerStarted","Data":"d282ded2070dcea18a35debb2131da453bad9fd12d02d88fb6d5f2b6cc7927c7"} Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.504551 5113 generic.go:358] "Generic (PLEG): container finished" podID="998917ca-cb0e-434f-88cc-1a9d0d26a429" containerID="1e8b4cc0237f4c4d3d8179d53ce255b648a1a6dc109a3b01bf334b57bff29048" exitCode=0 Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.504621 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkh7h" event={"ID":"998917ca-cb0e-434f-88cc-1a9d0d26a429","Type":"ContainerDied","Data":"1e8b4cc0237f4c4d3d8179d53ce255b648a1a6dc109a3b01bf334b57bff29048"} Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.504694 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkh7h" event={"ID":"998917ca-cb0e-434f-88cc-1a9d0d26a429","Type":"ContainerStarted","Data":"a9f71f7d2d746f192a09d0547a9842ea50d1750a1389f50b40d73b3b5b4bf87f"} Dec 08 17:46:45 crc kubenswrapper[5113]: I1208 17:46:45.594314 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gvdg5" podStartSLOduration=4.088501877 podStartE2EDuration="4.594291524s" podCreationTimestamp="2025-12-08 17:46:41 +0000 UTC" firstStartedPulling="2025-12-08 17:46:42.420145429 +0000 UTC 
m=+368.135938545" lastFinishedPulling="2025-12-08 17:46:42.925935076 +0000 UTC m=+368.641728192" observedRunningTime="2025-12-08 17:46:45.589055923 +0000 UTC m=+371.304849049" watchObservedRunningTime="2025-12-08 17:46:45.594291524 +0000 UTC m=+371.310084640" Dec 08 17:46:46 crc kubenswrapper[5113]: I1208 17:46:46.512992 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkh7h" event={"ID":"998917ca-cb0e-434f-88cc-1a9d0d26a429","Type":"ContainerStarted","Data":"2cd6d913bd1f0b0025679cfb7bd2d6f89affdc6a1800d835112518c87fd6df9e"} Dec 08 17:46:46 crc kubenswrapper[5113]: I1208 17:46:46.516451 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqqpz" event={"ID":"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef","Type":"ContainerStarted","Data":"2e748e32b337bde74f69f3b539a6c00242e63f5b76e33ce1ed8e5dff11349061"} Dec 08 17:46:46 crc kubenswrapper[5113]: I1208 17:46:46.518871 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7dpp" event={"ID":"2223f883-f124-4e7d-a809-e4ecb7a340aa","Type":"ContainerStarted","Data":"1d9c561b5f519cc50a929fea95b56e66815946b906c35f82859eda8447852338"} Dec 08 17:46:46 crc kubenswrapper[5113]: I1208 17:46:46.591583 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7dpp" podStartSLOduration=3.406791973 podStartE2EDuration="6.591559363s" podCreationTimestamp="2025-12-08 17:46:40 +0000 UTC" firstStartedPulling="2025-12-08 17:46:41.406156171 +0000 UTC m=+367.121949287" lastFinishedPulling="2025-12-08 17:46:44.590923561 +0000 UTC m=+370.306716677" observedRunningTime="2025-12-08 17:46:46.56016203 +0000 UTC m=+372.275955166" watchObservedRunningTime="2025-12-08 17:46:46.591559363 +0000 UTC m=+372.307352479" Dec 08 17:46:46 crc kubenswrapper[5113]: I1208 17:46:46.600701 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqqpz" podStartSLOduration=3.490335313 podStartE2EDuration="4.600646207s" podCreationTimestamp="2025-12-08 17:46:42 +0000 UTC" firstStartedPulling="2025-12-08 17:46:43.454753182 +0000 UTC m=+369.170546298" lastFinishedPulling="2025-12-08 17:46:44.565064076 +0000 UTC m=+370.280857192" observedRunningTime="2025-12-08 17:46:46.590370251 +0000 UTC m=+372.306163367" watchObservedRunningTime="2025-12-08 17:46:46.600646207 +0000 UTC m=+372.316439323" Dec 08 17:46:47 crc kubenswrapper[5113]: I1208 17:46:47.529278 5113 generic.go:358] "Generic (PLEG): container finished" podID="998917ca-cb0e-434f-88cc-1a9d0d26a429" containerID="2cd6d913bd1f0b0025679cfb7bd2d6f89affdc6a1800d835112518c87fd6df9e" exitCode=0 Dec 08 17:46:47 crc kubenswrapper[5113]: I1208 17:46:47.529417 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkh7h" event={"ID":"998917ca-cb0e-434f-88cc-1a9d0d26a429","Type":"ContainerDied","Data":"2cd6d913bd1f0b0025679cfb7bd2d6f89affdc6a1800d835112518c87fd6df9e"} Dec 08 17:46:49 crc kubenswrapper[5113]: I1208 17:46:49.112298 5113 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 17:46:49 crc kubenswrapper[5113]: I1208 17:46:49.550026 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkh7h" 
event={"ID":"998917ca-cb0e-434f-88cc-1a9d0d26a429","Type":"ContainerStarted","Data":"7e439d064535500cb632651ad79a5663add5029173dabad2a38c965a56daf471"} Dec 08 17:46:49 crc kubenswrapper[5113]: I1208 17:46:49.575639 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wkh7h" podStartSLOduration=6.040837807 podStartE2EDuration="6.575614682s" podCreationTimestamp="2025-12-08 17:46:43 +0000 UTC" firstStartedPulling="2025-12-08 17:46:45.505486989 +0000 UTC m=+371.221280105" lastFinishedPulling="2025-12-08 17:46:46.040263874 +0000 UTC m=+371.756056980" observedRunningTime="2025-12-08 17:46:49.574355098 +0000 UTC m=+375.290148234" watchObservedRunningTime="2025-12-08 17:46:49.575614682 +0000 UTC m=+375.291407808" Dec 08 17:46:50 crc kubenswrapper[5113]: I1208 17:46:50.482797 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:50 crc kubenswrapper[5113]: I1208 17:46:50.483363 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:50 crc kubenswrapper[5113]: I1208 17:46:50.536085 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:50 crc kubenswrapper[5113]: I1208 17:46:50.605356 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t7dpp" Dec 08 17:46:51 crc kubenswrapper[5113]: I1208 17:46:51.466172 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:51 crc kubenswrapper[5113]: I1208 17:46:51.466848 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:51 crc kubenswrapper[5113]: I1208 17:46:51.513717 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:51 crc kubenswrapper[5113]: I1208 17:46:51.615082 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gvdg5" Dec 08 17:46:52 crc kubenswrapper[5113]: I1208 17:46:52.877193 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:52 crc kubenswrapper[5113]: I1208 17:46:52.877282 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:52 crc kubenswrapper[5113]: I1208 17:46:52.929754 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:53 crc kubenswrapper[5113]: I1208 17:46:53.619249 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:46:54 crc kubenswrapper[5113]: I1208 17:46:54.272489 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:54 crc kubenswrapper[5113]: I1208 17:46:54.272564 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:54 crc kubenswrapper[5113]: I1208 17:46:54.325349 5113 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:46:54 crc kubenswrapper[5113]: I1208 17:46:54.628029 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkh7h" Dec 08 17:47:23 crc kubenswrapper[5113]: I1208 17:47:23.256333 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:47:23 crc kubenswrapper[5113]: I1208 17:47:23.258495 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:47:53 crc kubenswrapper[5113]: I1208 17:47:53.256182 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:47:53 crc kubenswrapper[5113]: I1208 17:47:53.257258 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:48:23 crc kubenswrapper[5113]: I1208 17:48:23.256389 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:48:23 crc kubenswrapper[5113]: I1208 17:48:23.257247 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:48:23 crc kubenswrapper[5113]: I1208 17:48:23.257318 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:48:23 crc kubenswrapper[5113]: I1208 17:48:23.259319 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e39046b528925c2cc2211aee5d6a8acef683e38fa51b9e6de30a762333639281"} pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:48:23 crc kubenswrapper[5113]: I1208 17:48:23.259464 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" containerID="cri-o://e39046b528925c2cc2211aee5d6a8acef683e38fa51b9e6de30a762333639281" 
gracePeriod=600 Dec 08 17:48:24 crc kubenswrapper[5113]: I1208 17:48:24.188808 5113 generic.go:358] "Generic (PLEG): container finished" podID="52658507-b084-49cb-a694-f012d44ccc82" containerID="e39046b528925c2cc2211aee5d6a8acef683e38fa51b9e6de30a762333639281" exitCode=0 Dec 08 17:48:24 crc kubenswrapper[5113]: I1208 17:48:24.189113 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerDied","Data":"e39046b528925c2cc2211aee5d6a8acef683e38fa51b9e6de30a762333639281"} Dec 08 17:48:24 crc kubenswrapper[5113]: I1208 17:48:24.190190 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"6354552aeb6257facad872f00416b46d71ae4e5554416dec9e0813960cf8c0f8"} Dec 08 17:48:24 crc kubenswrapper[5113]: I1208 17:48:24.190235 5113 scope.go:117] "RemoveContainer" containerID="f6f7c021a2fcc0468a28a7246bb0df375a7b306c4799388b2ae1634b8cdc5d78" Dec 08 17:48:47 crc kubenswrapper[5113]: I1208 17:48:46.999710 5113 scope.go:117] "RemoveContainer" containerID="0dd538565877321c39023adc4ffe8860e82713adbb30fbc59eb10dc32a4bfb10" Dec 08 17:50:23 crc kubenswrapper[5113]: I1208 17:50:23.255981 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:23 crc kubenswrapper[5113]: I1208 17:50:23.256764 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:50:34 crc kubenswrapper[5113]: I1208 17:50:34.941297 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:50:34 crc kubenswrapper[5113]: I1208 17:50:34.942561 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:50:50 crc kubenswrapper[5113]: I1208 17:50:50.246195 5113 ???:1] "http: TLS handshake error from 192.168.126.11:35254: no serving certificate available for the kubelet" Dec 08 17:50:53 crc kubenswrapper[5113]: I1208 17:50:53.256334 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:50:53 crc kubenswrapper[5113]: I1208 17:50:53.256524 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:51:23 crc kubenswrapper[5113]: I1208 17:51:23.256197 5113 
patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:51:23 crc kubenswrapper[5113]: I1208 17:51:23.257162 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:51:23 crc kubenswrapper[5113]: I1208 17:51:23.257241 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:51:23 crc kubenswrapper[5113]: I1208 17:51:23.258347 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6354552aeb6257facad872f00416b46d71ae4e5554416dec9e0813960cf8c0f8"} pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:51:23 crc kubenswrapper[5113]: I1208 17:51:23.258521 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" containerID="cri-o://6354552aeb6257facad872f00416b46d71ae4e5554416dec9e0813960cf8c0f8" gracePeriod=600 Dec 08 17:51:23 crc kubenswrapper[5113]: I1208 17:51:23.406496 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:51:24 crc kubenswrapper[5113]: I1208 17:51:24.389895 5113 generic.go:358] "Generic (PLEG): container finished" podID="52658507-b084-49cb-a694-f012d44ccc82" containerID="6354552aeb6257facad872f00416b46d71ae4e5554416dec9e0813960cf8c0f8" exitCode=0 Dec 08 17:51:24 crc kubenswrapper[5113]: I1208 17:51:24.389994 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerDied","Data":"6354552aeb6257facad872f00416b46d71ae4e5554416dec9e0813960cf8c0f8"} Dec 08 17:51:24 crc kubenswrapper[5113]: I1208 17:51:24.390695 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"e91e9c2d7b1e37ebd3bc5750a4f89f644abb6b97e12e01ad60b986cb9a1422b5"} Dec 08 17:51:24 crc kubenswrapper[5113]: I1208 17:51:24.390750 5113 scope.go:117] "RemoveContainer" containerID="e39046b528925c2cc2211aee5d6a8acef683e38fa51b9e6de30a762333639281" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.359111 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq"] Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.360222 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="kube-rbac-proxy" containerID="cri-o://f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300" 
gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.360323 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="ovnkube-cluster-manager" containerID="cri-o://496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.573475 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pjxmr"] Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574189 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-controller" containerID="cri-o://3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574682 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="sbdb" containerID="cri-o://fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574751 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="nbdb" containerID="cri-o://1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574793 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="northd" containerID="cri-o://cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574827 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574863 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-node" containerID="cri-o://b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.574899 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-acl-logging" containerID="cri-o://39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.585621 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.590878 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8h7r\" (UniqueName: \"kubernetes.io/projected/88405869-34c6-458b-ab82-663f9a965335-kube-api-access-r8h7r\") pod \"88405869-34c6-458b-ab82-663f9a965335\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.590948 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert\") pod \"88405869-34c6-458b-ab82-663f9a965335\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.590976 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config\") pod \"88405869-34c6-458b-ab82-663f9a965335\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.591102 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides\") pod \"88405869-34c6-458b-ab82-663f9a965335\" (UID: \"88405869-34c6-458b-ab82-663f9a965335\") " Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.591954 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "88405869-34c6-458b-ab82-663f9a965335" (UID: "88405869-34c6-458b-ab82-663f9a965335"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.592160 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "88405869-34c6-458b-ab82-663f9a965335" (UID: "88405869-34c6-458b-ab82-663f9a965335"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.602780 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88405869-34c6-458b-ab82-663f9a965335-kube-api-access-r8h7r" (OuterVolumeSpecName: "kube-api-access-r8h7r") pod "88405869-34c6-458b-ab82-663f9a965335" (UID: "88405869-34c6-458b-ab82-663f9a965335"). InnerVolumeSpecName "kube-api-access-r8h7r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.605177 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "88405869-34c6-458b-ab82-663f9a965335" (UID: "88405869-34c6-458b-ab82-663f9a965335"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.619109 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2"] Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.619854 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="ovnkube-cluster-manager" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.619880 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="ovnkube-cluster-manager" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.619933 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="kube-rbac-proxy" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.619943 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="kube-rbac-proxy" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.620078 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="ovnkube-cluster-manager" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.620095 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="88405869-34c6-458b-ab82-663f9a965335" containerName="kube-rbac-proxy" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.626947 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.626948 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovnkube-controller" containerID="cri-o://6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" gracePeriod=30 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692299 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94ea0341-60e0-4bf4-b855-193c43cc6711-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692372 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94ea0341-60e0-4bf4-b855-193c43cc6711-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692500 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94ea0341-60e0-4bf4-b855-193c43cc6711-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692742 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plrsr\" (UniqueName: \"kubernetes.io/projected/94ea0341-60e0-4bf4-b855-193c43cc6711-kube-api-access-plrsr\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692890 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r8h7r\" (UniqueName: \"kubernetes.io/projected/88405869-34c6-458b-ab82-663f9a965335-kube-api-access-r8h7r\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692924 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88405869-34c6-458b-ab82-663f9a965335-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692947 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.692978 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88405869-34c6-458b-ab82-663f9a965335-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734443 5113 generic.go:358] "Generic (PLEG): container finished" podID="88405869-34c6-458b-ab82-663f9a965335" containerID="496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4" exitCode=0 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734478 5113 generic.go:358] "Generic (PLEG): container finished" podID="88405869-34c6-458b-ab82-663f9a965335" containerID="f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300" exitCode=0 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734553 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" event={"ID":"88405869-34c6-458b-ab82-663f9a965335","Type":"ContainerDied","Data":"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734587 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" event={"ID":"88405869-34c6-458b-ab82-663f9a965335","Type":"ContainerDied","Data":"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734600 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" event={"ID":"88405869-34c6-458b-ab82-663f9a965335","Type":"ContainerDied","Data":"279975528b5dd4bc1a6e47833b55d0e841546efddd5d2db6ec25ff2c8fa4b017"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734622 5113 scope.go:117] "RemoveContainer" containerID="496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.734791 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.745089 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pjxmr_150992a3-efc5-4dc2-a696-390ea843f8c4/ovn-acl-logging/0.log" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.747768 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pjxmr_150992a3-efc5-4dc2-a696-390ea843f8c4/ovn-controller/0.log" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748148 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576" exitCode=0 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748172 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef" exitCode=0 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748179 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20" exitCode=0 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748187 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade" exitCode=143 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748194 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633" exitCode=143 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748250 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748301 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748326 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748349 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.748360 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.754283 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-g9mkp_c4621882-3d98-4910-9263-5959d2302427/kube-multus/0.log" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.754331 5113 generic.go:358] "Generic (PLEG): container finished" podID="c4621882-3d98-4910-9263-5959d2302427" containerID="2bb12e71b3d999c009aafdef8c275c416a63bd71bfcab8ce7926b55d3bb95371" exitCode=2 Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.754363 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g9mkp" event={"ID":"c4621882-3d98-4910-9263-5959d2302427","Type":"ContainerDied","Data":"2bb12e71b3d999c009aafdef8c275c416a63bd71bfcab8ce7926b55d3bb95371"} Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.754930 5113 scope.go:117] "RemoveContainer" containerID="2bb12e71b3d999c009aafdef8c275c416a63bd71bfcab8ce7926b55d3bb95371" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.757711 5113 scope.go:117] "RemoveContainer" containerID="f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.793725 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94ea0341-60e0-4bf4-b855-193c43cc6711-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.793795 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94ea0341-60e0-4bf4-b855-193c43cc6711-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.793864 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plrsr\" (UniqueName: \"kubernetes.io/projected/94ea0341-60e0-4bf4-b855-193c43cc6711-kube-api-access-plrsr\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.793919 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94ea0341-60e0-4bf4-b855-193c43cc6711-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.795306 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/94ea0341-60e0-4bf4-b855-193c43cc6711-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.795849 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/94ea0341-60e0-4bf4-b855-193c43cc6711-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.807871 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq"] Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.809217 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/94ea0341-60e0-4bf4-b855-193c43cc6711-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.821332 5113 scope.go:117] "RemoveContainer" containerID="496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4" Dec 08 17:52:19 crc kubenswrapper[5113]: E1208 17:52:19.823466 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4\": container with ID starting with 496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4 not found: ID does not exist" containerID="496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.823517 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4"} err="failed to get container status \"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4\": rpc error: code = NotFound desc = could not find container \"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4\": container with ID starting with 496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4 not found: ID does not exist" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.823546 5113 scope.go:117] "RemoveContainer" containerID="f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.825756 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plrsr\" (UniqueName: \"kubernetes.io/projected/94ea0341-60e0-4bf4-b855-193c43cc6711-kube-api-access-plrsr\") pod \"ovnkube-control-plane-97c9b6c48-pnnm2\" (UID: \"94ea0341-60e0-4bf4-b855-193c43cc6711\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" Dec 08 17:52:19 crc kubenswrapper[5113]: E1208 17:52:19.826470 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300\": container with ID starting with f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300 not found: ID does not exist" containerID="f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.826517 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300"} err="failed to get container status \"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300\": rpc error: code = NotFound desc = could not find container \"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300\": container with ID starting with 
f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300 not found: ID does not exist" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.826546 5113 scope.go:117] "RemoveContainer" containerID="496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.826787 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4"} err="failed to get container status \"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4\": rpc error: code = NotFound desc = could not find container \"496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4\": container with ID starting with 496d9cca2e2323e517c80f28091223546b9cd5a92fa6f026ca9864bc19350ae4 not found: ID does not exist" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.826810 5113 scope.go:117] "RemoveContainer" containerID="f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.826997 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300"} err="failed to get container status \"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300\": rpc error: code = NotFound desc = could not find container \"f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300\": container with ID starting with f466f9b43bbd15bfcd6c08c44bd171d89b71be34d74a18eb58eaf6dd4dda0300 not found: ID does not exist" Dec 08 17:52:19 crc kubenswrapper[5113]: I1208 17:52:19.842780 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-k6xbq"] Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.006072 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.318171 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e is running failed: container process not found" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.318513 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e is running failed: container process not found" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.319019 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e is running failed: container process not found" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"]
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.319075 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovnkube-controller" probeResult="unknown"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.377434 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pjxmr_150992a3-efc5-4dc2-a696-390ea843f8c4/ovn-acl-logging/0.log"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.379305 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pjxmr_150992a3-efc5-4dc2-a696-390ea843f8c4/ovn-controller/0.log"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.380130 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.400696 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-netns\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.400827 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-systemd\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.400889 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/150992a3-efc5-4dc2-a696-390ea843f8c4-ovn-node-metrics-cert\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.400892 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.400944 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-slash\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.400989 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-ovn-kubernetes\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401012 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-node-log\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401089 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-slash" (OuterVolumeSpecName: "host-slash") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401137 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401151 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-node-log" (OuterVolumeSpecName: "node-log") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401196 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401234 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401269 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-netd\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401296 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-bin\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401333 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqg2d\" (UniqueName: \"kubernetes.io/projected/150992a3-efc5-4dc2-a696-390ea843f8c4-kube-api-access-xqg2d\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401364 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-systemd-units\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401416 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-var-lib-openvswitch\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401444 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-env-overrides\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401469 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-config\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401554 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-script-lib\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401614 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-kubelet\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401638 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-log-socket\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401755 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-ovn\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401782 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-etc-openvswitch\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.401821 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-openvswitch\") pod \"150992a3-efc5-4dc2-a696-390ea843f8c4\" (UID: \"150992a3-efc5-4dc2-a696-390ea843f8c4\") "
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402358 5113 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402385 5113 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-slash\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402399 5113 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402415 5113 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-node-log\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402430 5113 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402394 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402425 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402456 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402468 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402435 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402492 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-log-socket" (OuterVolumeSpecName: "log-socket") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402505 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402523 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402539 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402556 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.402803 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.403432 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.407237 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150992a3-efc5-4dc2-a696-390ea843f8c4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.407547 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150992a3-efc5-4dc2-a696-390ea843f8c4-kube-api-access-xqg2d" (OuterVolumeSpecName: "kube-api-access-xqg2d") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "kube-api-access-xqg2d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.424265 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "150992a3-efc5-4dc2-a696-390ea843f8c4" (UID: "150992a3-efc5-4dc2-a696-390ea843f8c4"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.441509 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fcvkx"]
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442181 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-node"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442208 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-node"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442221 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovnkube-controller"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442230 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovnkube-controller"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442251 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kubecfg-setup"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442258 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kubecfg-setup"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442271 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-controller"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442279 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-controller"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442286 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="sbdb"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442294 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="sbdb"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442311 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-acl-logging"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442318 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-acl-logging"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442330 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442337 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442344 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="nbdb"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442350 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="nbdb"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442362 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="northd"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442367 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="northd"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442471 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-node"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442492 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442505 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-acl-logging"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442519 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="northd"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442530 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovnkube-controller"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442540 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="nbdb"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442548 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="ovn-controller"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.442556 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerName="sbdb"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.451269 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.504969 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-ovnkube-script-lib\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505023 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-run-netns\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505095 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-systemd-units\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505115 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-kubelet\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505138 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-systemd\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505155 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505176 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-cni-bin\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505205 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505229 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-log-socket\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505254 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-ovnkube-config\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505308 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-var-lib-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505329 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm62m\" (UniqueName: \"kubernetes.io/projected/247f31a0-6563-44d8-8645-baf9118572e7-kube-api-access-tm62m\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505354 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-ovn\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505376 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/247f31a0-6563-44d8-8645-baf9118572e7-ovn-node-metrics-cert\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505406 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-etc-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505443 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-node-log\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505463 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-cni-netd\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505495 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505518 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-slash\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-env-overrides\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505578 5113 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505588 5113 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-log-socket\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505596 5113 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505607 5113 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505616 5113 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505626 5113 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505636 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/150992a3-efc5-4dc2-a696-390ea843f8c4-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505645 5113 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505654 5113 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505664 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqg2d\" (UniqueName: \"kubernetes.io/projected/150992a3-efc5-4dc2-a696-390ea843f8c4-kube-api-access-xqg2d\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505673 5113 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505682 5113 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/150992a3-efc5-4dc2-a696-390ea843f8c4-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505690 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505698 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.505706 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/150992a3-efc5-4dc2-a696-390ea843f8c4-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607159 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607247 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-slash\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607278 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-env-overrides\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607301 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-ovnkube-script-lib\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607322 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-run-netns\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607351 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-systemd-units\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607374 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-kubelet\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607399 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-systemd\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607439 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607462 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-cni-bin\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607488 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607517 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-log-socket\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607541 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-ovnkube-config\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607597 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-var-lib-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607618 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tm62m\" (UniqueName: \"kubernetes.io/projected/247f31a0-6563-44d8-8645-baf9118572e7-kube-api-access-tm62m\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607638 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-ovn\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607663 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/247f31a0-6563-44d8-8645-baf9118572e7-ovn-node-metrics-cert\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607691 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-etc-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607735 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-node-log\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607754 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-cni-netd\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607848 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-cni-netd\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607897 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.607926 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-slash\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.608652 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-env-overrides\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609131 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-ovnkube-script-lib\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609176 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-run-netns\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609204 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-systemd-units\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609224 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-kubelet\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609244 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-systemd\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609264 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609285 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-cni-bin\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609307 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609334 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-log-socket\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609727 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/247f31a0-6563-44d8-8645-baf9118572e7-ovnkube-config\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.609766 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-var-lib-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.610104 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-run-ovn\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.610681 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-etc-openvswitch\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.610739 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/247f31a0-6563-44d8-8645-baf9118572e7-node-log\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.615379 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/247f31a0-6563-44d8-8645-baf9118572e7-ovn-node-metrics-cert\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.626721 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm62m\" (UniqueName: \"kubernetes.io/projected/247f31a0-6563-44d8-8645-baf9118572e7-kube-api-access-tm62m\") pod \"ovnkube-node-fcvkx\" (UID: \"247f31a0-6563-44d8-8645-baf9118572e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.695687 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88405869-34c6-458b-ab82-663f9a965335" path="/var/lib/kubelet/pods/88405869-34c6-458b-ab82-663f9a965335/volumes"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.766010 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pjxmr_150992a3-efc5-4dc2-a696-390ea843f8c4/ovn-acl-logging/0.log"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.766510 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pjxmr_150992a3-efc5-4dc2-a696-390ea843f8c4/ovn-controller/0.log"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.766864 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" exitCode=0
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.766893 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4" exitCode=0
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.766904 5113 generic.go:358] "Generic (PLEG): container finished" podID="150992a3-efc5-4dc2-a696-390ea843f8c4" containerID="cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353" exitCode=0
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.767066 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.767091 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.767141 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.767160 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.767173 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pjxmr" event={"ID":"150992a3-efc5-4dc2-a696-390ea843f8c4","Type":"ContainerDied","Data":"bd5de27a5a3deaeb9d36de7de1ec65c20922929eeca215d274ef8ac2a9643bd2"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.767195 5113 scope.go:117] "RemoveContainer" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.770077 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g9mkp_c4621882-3d98-4910-9263-5959d2302427/kube-multus/0.log"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.770175 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g9mkp" event={"ID":"c4621882-3d98-4910-9263-5959d2302427","Type":"ContainerStarted","Data":"bc0499ec95c235ba1774831efada56b6552dda18fafdd8631b555deadcbde66c"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.774891 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.777462 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" event={"ID":"94ea0341-60e0-4bf4-b855-193c43cc6711","Type":"ContainerStarted","Data":"3eb36135a29e9997a9d59239614b0d73e97a464dbb2a0fe7a4fc0b5e15831529"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.777510 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" event={"ID":"94ea0341-60e0-4bf4-b855-193c43cc6711","Type":"ContainerStarted","Data":"f04760f712be1b1fc0a6fb58241f8c74c88afc7bb17ff5a88beba6c3063e1935"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.777527 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" event={"ID":"94ea0341-60e0-4bf4-b855-193c43cc6711","Type":"ContainerStarted","Data":"37641fcf6a933a920dd50ce7ea5ec664287be88776401a8c2383098c589c37cc"}
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.798279 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pjxmr"]
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.806153 5113 scope.go:117] "RemoveContainer" containerID="fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.806789 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pjxmr"]
Dec 08 17:52:20 crc kubenswrapper[5113]: W1208 17:52:20.811941 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod247f31a0_6563_44d8_8645_baf9118572e7.slice/crio-d3417bd0404a899268dd275476ec5d747708562ddfbc1a868593c0a7d9b3c7ef WatchSource:0}: Error finding container d3417bd0404a899268dd275476ec5d747708562ddfbc1a868593c0a7d9b3c7ef: Status 404 returned error can't find the container with id d3417bd0404a899268dd275476ec5d747708562ddfbc1a868593c0a7d9b3c7ef
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.854122 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-pnnm2" podStartSLOduration=1.8540988170000001 podStartE2EDuration="1.854098817s" podCreationTimestamp="2025-12-08 17:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:52:20.853108592 +0000 UTC m=+706.568901708" watchObservedRunningTime="2025-12-08 17:52:20.854098817 +0000 UTC m=+706.569891933"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.859520 5113 scope.go:117] "RemoveContainer" containerID="1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.882629 5113 scope.go:117] "RemoveContainer" containerID="cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.897139 5113 scope.go:117] "RemoveContainer" containerID="6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.931460 5113 scope.go:117] "RemoveContainer" containerID="b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.945131 5113 scope.go:117] "RemoveContainer" containerID="39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.959570 5113 scope.go:117] "RemoveContainer" containerID="3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.975306 5113 scope.go:117] "RemoveContainer" containerID="d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.987941 5113 scope.go:117] "RemoveContainer" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.988842 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": container with ID starting with 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e not found: ID does not exist" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.988891 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"} err="failed to get container status \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": rpc error: code = NotFound desc = could not find container \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": container with ID starting with 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.988921 5113 scope.go:117] "RemoveContainer" containerID="fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.990006 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": container with ID starting with fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4 not found: ID does not exist" containerID="fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.990075 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"} err="failed to get container status \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": rpc error: code = NotFound desc = could not find container \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": container with ID starting with fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4 not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.990107 5113 scope.go:117] "RemoveContainer" containerID="1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.990540 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": container with ID starting with 1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576 not found: ID does not exist" containerID="1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.990579 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"} err="failed to get container status \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": rpc error: code = NotFound desc = could not find container \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": container with ID starting with 1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576 not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.990608 5113 scope.go:117] "RemoveContainer" containerID="cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.990930 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": container with ID starting with cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353 not found: ID does not exist" containerID="cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.991001 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"} err="failed to get container status \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": rpc error: code = NotFound desc = could not find container \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": container with ID starting with cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353 not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.991091 5113 scope.go:117] "RemoveContainer" containerID="6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.991452 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": container with ID starting with 6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef not found: ID does not exist" containerID="6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.991477 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"} err="failed to get container status \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": rpc error: code = NotFound desc = could not find container \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": container with ID starting with 6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.991492 5113 scope.go:117] "RemoveContainer" containerID="b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.991746 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": container with ID starting with b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20 not found: ID does not exist" containerID="b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.991772 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"} err="failed to get container status \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": rpc error: code = NotFound desc = could not find container \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": container with ID starting with b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20 not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.991789 5113 scope.go:117] "RemoveContainer" containerID="39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.992051 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": container with ID starting with 39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade not found: ID does not exist" containerID="39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.992092 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"} err="failed to get container status \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": rpc error: code = NotFound desc = could not find container \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": container with ID starting with 39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.992112 5113 scope.go:117] "RemoveContainer" containerID="3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"
Dec 08 17:52:20 crc kubenswrapper[5113]: E1208 17:52:20.992602 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": container with ID starting with 3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633 not found: ID does not exist" containerID="3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.992640 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"} err="failed to get container status \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": rpc error: code = NotFound desc = could not find container \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": container with ID starting with 3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633 not found: ID does not exist"
Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.992664 5113 scope.go:117] "RemoveContainer" containerID="d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f"
Dec 08 17:52:20 crc
kubenswrapper[5113]: E1208 17:52:20.992956 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": container with ID starting with d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f not found: ID does not exist" containerID="d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.992983 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f"} err="failed to get container status \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": rpc error: code = NotFound desc = could not find container \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": container with ID starting with d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993000 5113 scope.go:117] "RemoveContainer" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993284 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"} err="failed to get container status \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": rpc error: code = NotFound desc = could not find container \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": container with ID starting with 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993312 5113 scope.go:117] "RemoveContainer" containerID="fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993542 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"} err="failed to get container status \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": rpc error: code = NotFound desc = could not find container \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": container with ID starting with fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993564 5113 scope.go:117] "RemoveContainer" containerID="1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993861 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"} err="failed to get container status \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": rpc error: code = NotFound desc = could not find container \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": container with ID starting with 1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.993882 5113 scope.go:117] "RemoveContainer" containerID="cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353" Dec 08 17:52:20 crc 
kubenswrapper[5113]: I1208 17:52:20.994110 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"} err="failed to get container status \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": rpc error: code = NotFound desc = could not find container \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": container with ID starting with cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.994140 5113 scope.go:117] "RemoveContainer" containerID="6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.994385 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"} err="failed to get container status \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": rpc error: code = NotFound desc = could not find container \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": container with ID starting with 6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.994406 5113 scope.go:117] "RemoveContainer" containerID="b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.994647 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"} err="failed to get container status \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": rpc error: code = NotFound desc = could not find container \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": container with ID starting with b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.994665 5113 scope.go:117] "RemoveContainer" containerID="39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.994966 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"} err="failed to get container status \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": rpc error: code = NotFound desc = could not find container \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": container with ID starting with 39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995049 5113 scope.go:117] "RemoveContainer" containerID="3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995253 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"} err="failed to get container status \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": rpc error: code = NotFound desc = could not find container \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": container with ID 
starting with 3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995314 5113 scope.go:117] "RemoveContainer" containerID="d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995569 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f"} err="failed to get container status \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": rpc error: code = NotFound desc = could not find container \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": container with ID starting with d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995603 5113 scope.go:117] "RemoveContainer" containerID="6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995888 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e"} err="failed to get container status \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": rpc error: code = NotFound desc = could not find container \"6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e\": container with ID starting with 6b60972b52d5e7c1b0aa47930aeb5aed3fd27a6fee0632be1e05721a6b727a0e not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.995915 5113 scope.go:117] "RemoveContainer" containerID="fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.996194 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4"} err="failed to get container status \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": rpc error: code = NotFound desc = could not find container \"fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4\": container with ID starting with fc6ab8ca21d78d4cc8264f0890cf7cde25b2de4f67bb8888206b9c644fac5fa4 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.996220 5113 scope.go:117] "RemoveContainer" containerID="1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.996471 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576"} err="failed to get container status \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": rpc error: code = NotFound desc = could not find container \"1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576\": container with ID starting with 1aa12ba671be54145a13c20839cf44913ffb9be8fe576469a48eb60f2c171576 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.996498 5113 scope.go:117] "RemoveContainer" containerID="cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.996785 5113 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353"} err="failed to get container status \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": rpc error: code = NotFound desc = could not find container \"cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353\": container with ID starting with cd593bf9961b955d9d209cffe4c05faea15679a0df9d8b9086a71a2c70f33353 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.996807 5113 scope.go:117] "RemoveContainer" containerID="6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997026 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef"} err="failed to get container status \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": rpc error: code = NotFound desc = could not find container \"6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef\": container with ID starting with 6aab46e26de537969ce78e8806ff6b40070192322b9987fb2b0e7e3964f13cef not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997062 5113 scope.go:117] "RemoveContainer" containerID="b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997299 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20"} err="failed to get container status \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": rpc error: code = NotFound desc = could not find container \"b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20\": container with ID starting with b5ed4c806bff078d0a50267d1a43cd4a05a632c20d42eae5404b0c4985b5dc20 not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997331 5113 scope.go:117] "RemoveContainer" containerID="39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997547 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade"} err="failed to get container status \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": rpc error: code = NotFound desc = could not find container \"39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade\": container with ID starting with 39bd403cb63c15b30d995764bee3aef6787056974647474abf1ef1b16c73eade not found: ID does not exist" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997571 5113 scope.go:117] "RemoveContainer" containerID="3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997814 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633"} err="failed to get container status \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": rpc error: code = NotFound desc = could not find container \"3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633\": container with ID starting with 3cc2d291804ccb10327245da2d2ac283fd6b9b990e4cbf68b7093c408323f633 not found: ID does not exist" Dec 
08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.997839 5113 scope.go:117] "RemoveContainer" containerID="d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f" Dec 08 17:52:20 crc kubenswrapper[5113]: I1208 17:52:20.998214 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f"} err="failed to get container status \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": rpc error: code = NotFound desc = could not find container \"d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f\": container with ID starting with d81a45d6bfc07a167eaa096d0353dc8fa30b0916e5765722fbece87f5d0b4b9f not found: ID does not exist" Dec 08 17:52:21 crc kubenswrapper[5113]: I1208 17:52:21.788155 5113 generic.go:358] "Generic (PLEG): container finished" podID="247f31a0-6563-44d8-8645-baf9118572e7" containerID="bd1cb60ffa8de8f2c49f46978e5e04df8f50a7937f6a1844bae91c75458cff9d" exitCode=0 Dec 08 17:52:21 crc kubenswrapper[5113]: I1208 17:52:21.788250 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerDied","Data":"bd1cb60ffa8de8f2c49f46978e5e04df8f50a7937f6a1844bae91c75458cff9d"} Dec 08 17:52:21 crc kubenswrapper[5113]: I1208 17:52:21.788642 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"d3417bd0404a899268dd275476ec5d747708562ddfbc1a868593c0a7d9b3c7ef"} Dec 08 17:52:22 crc kubenswrapper[5113]: I1208 17:52:22.692733 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150992a3-efc5-4dc2-a696-390ea843f8c4" path="/var/lib/kubelet/pods/150992a3-efc5-4dc2-a696-390ea843f8c4/volumes" Dec 08 17:52:22 crc kubenswrapper[5113]: I1208 17:52:22.798776 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"503812733d55d0c63983262598cf634a97d692d1823be36b387607204efea0cb"} Dec 08 17:52:22 crc kubenswrapper[5113]: I1208 17:52:22.798839 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"ce984cca69a43c3899d2f6096e652056ec116bb6e84519644b680e02fe0632a8"} Dec 08 17:52:22 crc kubenswrapper[5113]: I1208 17:52:22.798851 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"c26c463f1dd07ccf25b38236e9fb8d621ddabd52a5b57437b10d76ea899bc639"} Dec 08 17:52:22 crc kubenswrapper[5113]: I1208 17:52:22.798860 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"aaebd844691f40e4a0bde7b5ef9cbbebbf8fadd1e8474999507657d5215963d9"} Dec 08 17:52:22 crc kubenswrapper[5113]: I1208 17:52:22.798870 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"6dfc34f4e80802a130ba910004605eef5771f79a5dc0c833924407207b4dc54d"} Dec 08 17:52:22 crc kubenswrapper[5113]: 
I1208 17:52:22.798885 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"99725b439475a7d1eaeb4c952b84a612d5ec895f0ef29c3347c84fbad01f5d45"} Dec 08 17:52:25 crc kubenswrapper[5113]: I1208 17:52:25.819606 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"09553e6263561f3b28f31932372d1e26f0a89d91d43b307b96b6f88bbb49ebd8"} Dec 08 17:52:27 crc kubenswrapper[5113]: I1208 17:52:27.835348 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" event={"ID":"247f31a0-6563-44d8-8645-baf9118572e7","Type":"ContainerStarted","Data":"b29c622f9a586c0a083b574e8aa321c7f6d9fa4365fa4b2031ac1a0a135fbfdc"} Dec 08 17:52:27 crc kubenswrapper[5113]: I1208 17:52:27.835972 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" Dec 08 17:52:27 crc kubenswrapper[5113]: I1208 17:52:27.836001 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" Dec 08 17:52:27 crc kubenswrapper[5113]: I1208 17:52:27.867684 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" Dec 08 17:52:27 crc kubenswrapper[5113]: I1208 17:52:27.872269 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" podStartSLOduration=7.872244295 podStartE2EDuration="7.872244295s" podCreationTimestamp="2025-12-08 17:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:52:27.86775894 +0000 UTC m=+713.583552086" watchObservedRunningTime="2025-12-08 17:52:27.872244295 +0000 UTC m=+713.588037411" Dec 08 17:52:28 crc kubenswrapper[5113]: I1208 17:52:28.841666 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" Dec 08 17:52:28 crc kubenswrapper[5113]: I1208 17:52:28.875283 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" Dec 08 17:53:00 crc kubenswrapper[5113]: I1208 17:53:00.879713 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fcvkx" Dec 08 17:53:23 crc kubenswrapper[5113]: I1208 17:53:23.255821 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:53:23 crc kubenswrapper[5113]: I1208 17:53:23.256646 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:53:29 crc kubenswrapper[5113]: I1208 17:53:29.708346 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqqpz"] Dec 08 
17:53:29 crc kubenswrapper[5113]: I1208 17:53:29.709181 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hqqpz" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="registry-server" containerID="cri-o://2e748e32b337bde74f69f3b539a6c00242e63f5b76e33ce1ed8e5dff11349061" gracePeriod=30 Dec 08 17:53:30 crc kubenswrapper[5113]: I1208 17:53:30.774961 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-fl5l8"] Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.051753 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-fl5l8"] Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.051963 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.186680 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8kl\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-kube-api-access-ls8kl\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187008 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6bf92e4-8074-4bf6-93db-80be9d238b74-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187139 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6bf92e4-8074-4bf6-93db-80be9d238b74-registry-certificates\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187217 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6bf92e4-8074-4bf6-93db-80be9d238b74-trusted-ca\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187305 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187521 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-registry-tls\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 
17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187581 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6bf92e4-8074-4bf6-93db-80be9d238b74-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.187660 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-bound-sa-token\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.218932 5113 generic.go:358] "Generic (PLEG): container finished" podID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerID="2e748e32b337bde74f69f3b539a6c00242e63f5b76e33ce1ed8e5dff11349061" exitCode=0 Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.219001 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqqpz" event={"ID":"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef","Type":"ContainerDied","Data":"2e748e32b337bde74f69f3b539a6c00242e63f5b76e33ce1ed8e5dff11349061"} Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.236993 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.288858 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6bf92e4-8074-4bf6-93db-80be9d238b74-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.288921 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6bf92e4-8074-4bf6-93db-80be9d238b74-registry-certificates\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.288944 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6bf92e4-8074-4bf6-93db-80be9d238b74-trusted-ca\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.288974 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-registry-tls\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 
17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.289443 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a6bf92e4-8074-4bf6-93db-80be9d238b74-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.289487 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6bf92e4-8074-4bf6-93db-80be9d238b74-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.289516 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-bound-sa-token\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.289557 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ls8kl\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-kube-api-access-ls8kl\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.290676 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a6bf92e4-8074-4bf6-93db-80be9d238b74-registry-certificates\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.290891 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6bf92e4-8074-4bf6-93db-80be9d238b74-trusted-ca\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.297076 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-registry-tls\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.297075 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a6bf92e4-8074-4bf6-93db-80be9d238b74-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.312632 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls8kl\" (UniqueName: 
\"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-kube-api-access-ls8kl\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.312874 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6bf92e4-8074-4bf6-93db-80be9d238b74-bound-sa-token\") pod \"image-registry-5d9d95bf5b-fl5l8\" (UID: \"a6bf92e4-8074-4bf6-93db-80be9d238b74\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.404138 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.644312 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-fl5l8"] Dec 08 17:53:31 crc kubenswrapper[5113]: I1208 17:53:31.874439 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.001876 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-utilities\") pod \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.001964 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-catalog-content\") pod \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.002117 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phmq9\" (UniqueName: \"kubernetes.io/projected/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-kube-api-access-phmq9\") pod \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\" (UID: \"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef\") " Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.004196 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-utilities" (OuterVolumeSpecName: "utilities") pod "810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" (UID: "810a9eee-e4c3-47af-a4f9-a02c0db8c1ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.010778 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-kube-api-access-phmq9" (OuterVolumeSpecName: "kube-api-access-phmq9") pod "810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" (UID: "810a9eee-e4c3-47af-a4f9-a02c0db8c1ef"). InnerVolumeSpecName "kube-api-access-phmq9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.013943 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" (UID: "810a9eee-e4c3-47af-a4f9-a02c0db8c1ef"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.104268 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.104321 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phmq9\" (UniqueName: \"kubernetes.io/projected/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-kube-api-access-phmq9\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.104336 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.229505 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqqpz" event={"ID":"810a9eee-e4c3-47af-a4f9-a02c0db8c1ef","Type":"ContainerDied","Data":"13c7f2da6f4e58e9090c055eae2029749017dc6c2f3789f0360a6972404ff3ae"} Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.229592 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqqpz" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.229597 5113 scope.go:117] "RemoveContainer" containerID="2e748e32b337bde74f69f3b539a6c00242e63f5b76e33ce1ed8e5dff11349061" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.232410 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" event={"ID":"a6bf92e4-8074-4bf6-93db-80be9d238b74","Type":"ContainerStarted","Data":"2349a4f814e3b60c319759a236a92760fe9da4aa94fd8d6febafaec91eb3870a"} Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.232473 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" event={"ID":"a6bf92e4-8074-4bf6-93db-80be9d238b74","Type":"ContainerStarted","Data":"7ea608be40685755d304100216404f9fbc60ae3e2b2281bb13cea332530ea9f1"} Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.232601 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.264662 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" podStartSLOduration=2.264638057 podStartE2EDuration="2.264638057s" podCreationTimestamp="2025-12-08 17:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:53:32.255651416 +0000 UTC m=+777.971444532" watchObservedRunningTime="2025-12-08 17:53:32.264638057 +0000 UTC m=+777.980431173" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.267321 5113 scope.go:117] "RemoveContainer" containerID="d95266da31d294a8e5f9e8f0e8e80853a69e04ea4c5ccd29b03bfbd4071801a2" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.286690 5113 scope.go:117] "RemoveContainer" containerID="0f8b6cb4beb47ae8d32f0e5188307554ba28bea440b1f9471e280e7693117dca" Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.292242 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-hqqpz"] Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.303085 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqqpz"] Dec 08 17:53:32 crc kubenswrapper[5113]: I1208 17:53:32.689307 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" path="/var/lib/kubelet/pods/810a9eee-e4c3-47af-a4f9-a02c0db8c1ef/volumes" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.657387 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb"] Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658364 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="extract-utilities" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658383 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="extract-utilities" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658417 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="registry-server" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658424 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="registry-server" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658431 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="extract-content" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658438 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="extract-content" Dec 08 17:53:33 crc kubenswrapper[5113]: I1208 17:53:33.658543 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="810a9eee-e4c3-47af-a4f9-a02c0db8c1ef" containerName="registry-server" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.160097 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb"] Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.160229 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.172704 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.273790 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb6ht\" (UniqueName: \"kubernetes.io/projected/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-kube-api-access-pb6ht\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.273877 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.273962 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.375745 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pb6ht\" (UniqueName: \"kubernetes.io/projected/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-kube-api-access-pb6ht\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.375814 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.375982 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.376448 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.376491 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.402366 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb6ht\" (UniqueName: \"kubernetes.io/projected/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-kube-api-access-pb6ht\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.484597 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:34 crc kubenswrapper[5113]: I1208 17:53:34.762773 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb"] Dec 08 17:53:34 crc kubenswrapper[5113]: W1208 17:53:34.774587 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f0cab9_1a54_4eeb_9db9_ca20bb82de45.slice/crio-06a157ac334772874d29a65596d252bb723c4db981969781c0cca4f62ed3091f WatchSource:0}: Error finding container 06a157ac334772874d29a65596d252bb723c4db981969781c0cca4f62ed3091f: Status 404 returned error can't find the container with id 06a157ac334772874d29a65596d252bb723c4db981969781c0cca4f62ed3091f Dec 08 17:53:35 crc kubenswrapper[5113]: I1208 17:53:35.252921 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerStarted","Data":"06a157ac334772874d29a65596d252bb723c4db981969781c0cca4f62ed3091f"} Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.260742 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerStarted","Data":"f8d9611eda6bfc0c2777bf0f1531b0cde913ab2396ba1cfa18f8eb733888e308"} Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.419516 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4lmf"] Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.656700 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4lmf"] Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.656908 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.819594 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-catalog-content\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.820113 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2f4k\" (UniqueName: \"kubernetes.io/projected/44406118-eeb7-4eba-b3d1-01873c372290-kube-api-access-l2f4k\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.820364 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-utilities\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.921473 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l2f4k\" (UniqueName: \"kubernetes.io/projected/44406118-eeb7-4eba-b3d1-01873c372290-kube-api-access-l2f4k\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.921995 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-utilities\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.922583 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-utilities\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.922620 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-catalog-content\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.922631 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-catalog-content\") pod \"redhat-operators-s4lmf\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.946138 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2f4k\" (UniqueName: \"kubernetes.io/projected/44406118-eeb7-4eba-b3d1-01873c372290-kube-api-access-l2f4k\") pod \"redhat-operators-s4lmf\" (UID: 
\"44406118-eeb7-4eba-b3d1-01873c372290\") " pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:36 crc kubenswrapper[5113]: I1208 17:53:36.972630 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:37 crc kubenswrapper[5113]: I1208 17:53:37.267966 5113 generic.go:358] "Generic (PLEG): container finished" podID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerID="f8d9611eda6bfc0c2777bf0f1531b0cde913ab2396ba1cfa18f8eb733888e308" exitCode=0 Dec 08 17:53:37 crc kubenswrapper[5113]: I1208 17:53:37.268076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerDied","Data":"f8d9611eda6bfc0c2777bf0f1531b0cde913ab2396ba1cfa18f8eb733888e308"} Dec 08 17:53:37 crc kubenswrapper[5113]: I1208 17:53:37.500728 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4lmf"] Dec 08 17:53:38 crc kubenswrapper[5113]: I1208 17:53:38.277348 5113 generic.go:358] "Generic (PLEG): container finished" podID="44406118-eeb7-4eba-b3d1-01873c372290" containerID="575a37d8b91a6d11e7c59291962b338893a4a6b131f1cdb00c2463d57d92db80" exitCode=0 Dec 08 17:53:38 crc kubenswrapper[5113]: I1208 17:53:38.277551 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerDied","Data":"575a37d8b91a6d11e7c59291962b338893a4a6b131f1cdb00c2463d57d92db80"} Dec 08 17:53:38 crc kubenswrapper[5113]: I1208 17:53:38.277968 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerStarted","Data":"a47161ecc2c19ecaf1563c73084e59ecdbd8df2cc2ed1f5d7316478d0be68e81"} Dec 08 17:53:40 crc kubenswrapper[5113]: I1208 17:53:40.723716 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf"] Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.175495 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf"] Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.175963 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2"] Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.175717 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.185486 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2"] Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.185649 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.213488 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.213687 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fzlg\" (UniqueName: \"kubernetes.io/projected/dc7fbeca-a482-46cb-b595-bdba3db567f5-kube-api-access-9fzlg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.213718 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.213750 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.213782 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dndq\" (UniqueName: \"kubernetes.io/projected/b20573fb-1782-4e38-98e2-ba89edd97c4f-kube-api-access-9dndq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.213863 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.315614 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.315696 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.315759 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9fzlg\" (UniqueName: \"kubernetes.io/projected/dc7fbeca-a482-46cb-b595-bdba3db567f5-kube-api-access-9fzlg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.316458 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.316548 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.316654 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.316812 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9dndq\" (UniqueName: \"kubernetes.io/projected/b20573fb-1782-4e38-98e2-ba89edd97c4f-kube-api-access-9dndq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.317120 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.317729 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.318267 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.345050 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fzlg\" (UniqueName: \"kubernetes.io/projected/dc7fbeca-a482-46cb-b595-bdba3db567f5-kube-api-access-9fzlg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.345558 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dndq\" (UniqueName: \"kubernetes.io/projected/b20573fb-1782-4e38-98e2-ba89edd97c4f-kube-api-access-9dndq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.506871 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.511412 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.656303 5113 generic.go:358] "Generic (PLEG): container finished" podID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerID="dd0bdd01f5314d006f376c71247e07463279a2b6eef4709b72dbff1effefbd5a" exitCode=0 Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.656620 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerDied","Data":"dd0bdd01f5314d006f376c71247e07463279a2b6eef4709b72dbff1effefbd5a"} Dec 08 17:53:42 crc kubenswrapper[5113]: I1208 17:53:42.661947 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerStarted","Data":"7e5efc8a2952e2e8af2eac282bddc26b7afc51b372744d30a05c5cc4fb96d3b2"} Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.056698 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf"] Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.123656 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2"] Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.677148 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerStarted","Data":"54a427860486dce83a3a69e1cdbfea7b9e82cab9d7e2ae621fb85cd3d08040f1"} Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.680962 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" event={"ID":"b20573fb-1782-4e38-98e2-ba89edd97c4f","Type":"ContainerStarted","Data":"a2f3e0cbac915262d3b490c32440fe905c78d5fcddf1fe91eb676bac973390d7"} Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.684449 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerStarted","Data":"261efc73ff21f71fb157ad858c73cb1b47f3b0ced9c189a395e773347ed7b1e2"} Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.684507 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerStarted","Data":"3b43c1f7bf768a8014bbc4e6081e4ef2f542bb04083f186fe2f44796bbe51dbe"} Dec 08 17:53:43 crc kubenswrapper[5113]: I1208 17:53:43.792598 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" podStartSLOduration=5.794690504 podStartE2EDuration="10.792570082s" podCreationTimestamp="2025-12-08 17:53:33 +0000 UTC" firstStartedPulling="2025-12-08 17:53:37.269894566 +0000 UTC m=+782.985687682" lastFinishedPulling="2025-12-08 17:53:42.267774144 +0000 UTC m=+787.983567260" observedRunningTime="2025-12-08 17:53:43.788997461 +0000 UTC m=+789.504790607" watchObservedRunningTime="2025-12-08 
17:53:43.792570082 +0000 UTC m=+789.508363198" Dec 08 17:53:44 crc kubenswrapper[5113]: I1208 17:53:44.707993 5113 generic.go:358] "Generic (PLEG): container finished" podID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerID="54a427860486dce83a3a69e1cdbfea7b9e82cab9d7e2ae621fb85cd3d08040f1" exitCode=0 Dec 08 17:53:44 crc kubenswrapper[5113]: I1208 17:53:44.708967 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerDied","Data":"54a427860486dce83a3a69e1cdbfea7b9e82cab9d7e2ae621fb85cd3d08040f1"} Dec 08 17:53:44 crc kubenswrapper[5113]: I1208 17:53:44.712615 5113 generic.go:358] "Generic (PLEG): container finished" podID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerID="ab91be19e64fe2491c2da92fe751c3c47cc3c1769510b6183196259c53c7b347" exitCode=0 Dec 08 17:53:44 crc kubenswrapper[5113]: I1208 17:53:44.712706 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" event={"ID":"b20573fb-1782-4e38-98e2-ba89edd97c4f","Type":"ContainerDied","Data":"ab91be19e64fe2491c2da92fe751c3c47cc3c1769510b6183196259c53c7b347"} Dec 08 17:53:44 crc kubenswrapper[5113]: I1208 17:53:44.715670 5113 generic.go:358] "Generic (PLEG): container finished" podID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerID="261efc73ff21f71fb157ad858c73cb1b47f3b0ced9c189a395e773347ed7b1e2" exitCode=0 Dec 08 17:53:44 crc kubenswrapper[5113]: I1208 17:53:44.716303 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerDied","Data":"261efc73ff21f71fb157ad858c73cb1b47f3b0ced9c189a395e773347ed7b1e2"} Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.436259 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lvnnn"] Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.638831 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lvnnn"] Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.639161 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.826254 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-utilities\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.826303 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vnn\" (UniqueName: \"kubernetes.io/projected/b72b075a-b742-4c30-ba4e-e8a31e8b7872-kube-api-access-w7vnn\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.826331 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-catalog-content\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.928334 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-utilities\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.928409 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vnn\" (UniqueName: \"kubernetes.io/projected/b72b075a-b742-4c30-ba4e-e8a31e8b7872-kube-api-access-w7vnn\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.928431 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-catalog-content\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.929339 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-catalog-content\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.929385 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-utilities\") pod \"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:45 crc kubenswrapper[5113]: I1208 17:53:45.967581 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vnn\" (UniqueName: \"kubernetes.io/projected/b72b075a-b742-4c30-ba4e-e8a31e8b7872-kube-api-access-w7vnn\") pod 
\"certified-operators-lvnnn\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") " pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.259389 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.273680 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.357570 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb6ht\" (UniqueName: \"kubernetes.io/projected/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-kube-api-access-pb6ht\") pod \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.357665 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-bundle\") pod \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.361864 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-bundle" (OuterVolumeSpecName: "bundle") pod "f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" (UID: "f1f0cab9-1a54-4eeb-9db9-ca20bb82de45"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.377729 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-kube-api-access-pb6ht" (OuterVolumeSpecName: "kube-api-access-pb6ht") pod "f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" (UID: "f1f0cab9-1a54-4eeb-9db9-ca20bb82de45"). InnerVolumeSpecName "kube-api-access-pb6ht". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.458731 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-util\") pod \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\" (UID: \"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45\") " Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.458958 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pb6ht\" (UniqueName: \"kubernetes.io/projected/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-kube-api-access-pb6ht\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.458973 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.469808 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-util" (OuterVolumeSpecName: "util") pod "f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" (UID: "f1f0cab9-1a54-4eeb-9db9-ca20bb82de45"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.560324 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f1f0cab9-1a54-4eeb-9db9-ca20bb82de45-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.708314 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lvnnn"] Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.814473 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" event={"ID":"f1f0cab9-1a54-4eeb-9db9-ca20bb82de45","Type":"ContainerDied","Data":"06a157ac334772874d29a65596d252bb723c4db981969781c0cca4f62ed3091f"} Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.814931 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a157ac334772874d29a65596d252bb723c4db981969781c0cca4f62ed3091f" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.814623 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rcmsb" Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.817341 5113 generic.go:358] "Generic (PLEG): container finished" podID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerID="3d647109487a4f0f366690dbe8b4fa7246b6f9f0560995f56f4cd95d5ef98264" exitCode=0 Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.817420 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" event={"ID":"b20573fb-1782-4e38-98e2-ba89edd97c4f","Type":"ContainerDied","Data":"3d647109487a4f0f366690dbe8b4fa7246b6f9f0560995f56f4cd95d5ef98264"} Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.821920 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerStarted","Data":"1dc3bc63211ee48958f02268e6b31874df977a68e9c80be3a5c6fdc9ea42eca0"} Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.825162 5113 generic.go:358] "Generic (PLEG): container finished" podID="44406118-eeb7-4eba-b3d1-01873c372290" containerID="7e5efc8a2952e2e8af2eac282bddc26b7afc51b372744d30a05c5cc4fb96d3b2" exitCode=0 Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.825268 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerDied","Data":"7e5efc8a2952e2e8af2eac282bddc26b7afc51b372744d30a05c5cc4fb96d3b2"} Dec 08 17:53:46 crc kubenswrapper[5113]: I1208 17:53:46.827124 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerStarted","Data":"d88c460c99491d1cfe9785eba3d05829f4ab310bca149f5db5abd8091758e0bc"} Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.843061 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerStarted","Data":"bc5ca72d7a4fe4364f7a566c5383d4f94b45e82b5117f15e3ec73be8b078d1cb"} Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.848900 5113 
generic.go:358] "Generic (PLEG): container finished" podID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerID="662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f" exitCode=0 Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.849006 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerDied","Data":"662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f"} Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.859904 5113 generic.go:358] "Generic (PLEG): container finished" podID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerID="f2e934c547e94642cd5d3ce56db305eada2c756ce410d4df7420e75f1ef86262" exitCode=0 Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.860177 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" event={"ID":"b20573fb-1782-4e38-98e2-ba89edd97c4f","Type":"ContainerDied","Data":"f2e934c547e94642cd5d3ce56db305eada2c756ce410d4df7420e75f1ef86262"} Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.870179 5113 generic.go:358] "Generic (PLEG): container finished" podID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerID="1dc3bc63211ee48958f02268e6b31874df977a68e9c80be3a5c6fdc9ea42eca0" exitCode=0 Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.870339 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerDied","Data":"1dc3bc63211ee48958f02268e6b31874df977a68e9c80be3a5c6fdc9ea42eca0"} Dec 08 17:53:47 crc kubenswrapper[5113]: I1208 17:53:47.881833 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4lmf" podStartSLOduration=7.892186603 podStartE2EDuration="11.881809018s" podCreationTimestamp="2025-12-08 17:53:36 +0000 UTC" firstStartedPulling="2025-12-08 17:53:38.278483877 +0000 UTC m=+783.994276993" lastFinishedPulling="2025-12-08 17:53:42.268106292 +0000 UTC m=+787.983899408" observedRunningTime="2025-12-08 17:53:47.874220065 +0000 UTC m=+793.590013211" watchObservedRunningTime="2025-12-08 17:53:47.881809018 +0000 UTC m=+793.597602134" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.103670 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59"] Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.105501 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerName="util" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.105551 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerName="util" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.105587 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerName="extract" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.105595 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerName="extract" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.105696 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" 
containerName="pull" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.105709 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerName="pull" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.106816 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1f0cab9-1a54-4eeb-9db9-ca20bb82de45" containerName="extract" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.157526 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.214068 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-bundle\") pod \"b20573fb-1782-4e38-98e2-ba89edd97c4f\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.214117 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dndq\" (UniqueName: \"kubernetes.io/projected/b20573fb-1782-4e38-98e2-ba89edd97c4f-kube-api-access-9dndq\") pod \"b20573fb-1782-4e38-98e2-ba89edd97c4f\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.214208 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-util\") pod \"b20573fb-1782-4e38-98e2-ba89edd97c4f\" (UID: \"b20573fb-1782-4e38-98e2-ba89edd97c4f\") " Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.215065 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-bundle" (OuterVolumeSpecName: "bundle") pod "b20573fb-1782-4e38-98e2-ba89edd97c4f" (UID: "b20573fb-1782-4e38-98e2-ba89edd97c4f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.222819 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20573fb-1782-4e38-98e2-ba89edd97c4f-kube-api-access-9dndq" (OuterVolumeSpecName: "kube-api-access-9dndq") pod "b20573fb-1782-4e38-98e2-ba89edd97c4f" (UID: "b20573fb-1782-4e38-98e2-ba89edd97c4f"). InnerVolumeSpecName "kube-api-access-9dndq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.316086 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:49 crc kubenswrapper[5113]: I1208 17:53:49.316153 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9dndq\" (UniqueName: \"kubernetes.io/projected/b20573fb-1782-4e38-98e2-ba89edd97c4f-kube-api-access-9dndq\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.146306 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-util" (OuterVolumeSpecName: "util") pod "b20573fb-1782-4e38-98e2-ba89edd97c4f" (UID: "b20573fb-1782-4e38-98e2-ba89edd97c4f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.231806 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b20573fb-1782-4e38-98e2-ba89edd97c4f-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.659992 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" event={"ID":"b20573fb-1782-4e38-98e2-ba89edd97c4f","Type":"ContainerDied","Data":"a2f3e0cbac915262d3b490c32440fe905c78d5fcddf1fe91eb676bac973390d7"} Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.660492 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f3e0cbac915262d3b490c32440fe905c78d5fcddf1fe91eb676bac973390d7" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.660511 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59"] Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.660145 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgnrc2" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.661463 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.739878 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.741453 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.741531 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6rd\" (UniqueName: \"kubernetes.io/projected/82a50f23-a9ba-4f40-ae26-112badc78302-kube-api-access-4s6rd\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.843234 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.843353 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.843397 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6rd\" (UniqueName: \"kubernetes.io/projected/82a50f23-a9ba-4f40-ae26-112badc78302-kube-api-access-4s6rd\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.843822 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.843865 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.864939 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6rd\" (UniqueName: \"kubernetes.io/projected/82a50f23-a9ba-4f40-ae26-112badc78302-kube-api-access-4s6rd\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.895727 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerStarted","Data":"42f2d9b7dc0e9654882459e61c6fc722453560135e56b5f9196e4c6afe7d2760"} Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.927104 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" podStartSLOduration=9.157464154 podStartE2EDuration="10.927073075s" podCreationTimestamp="2025-12-08 17:53:40 +0000 UTC" firstStartedPulling="2025-12-08 17:53:44.71726029 +0000 UTC m=+790.433053406" lastFinishedPulling="2025-12-08 17:53:46.486869201 +0000 UTC m=+792.202662327" observedRunningTime="2025-12-08 17:53:50.923251887 +0000 UTC m=+796.639045013" watchObservedRunningTime="2025-12-08 17:53:50.927073075 +0000 UTC m=+796.642866211" Dec 08 17:53:50 crc kubenswrapper[5113]: I1208 17:53:50.982873 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" Dec 08 17:53:51 crc kubenswrapper[5113]: W1208 17:53:51.336386 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82a50f23_a9ba_4f40_ae26_112badc78302.slice/crio-71324049004fe36c52159953648258ff1a350640bae548525d2cdb5e6b42dcdb WatchSource:0}: Error finding container 71324049004fe36c52159953648258ff1a350640bae548525d2cdb5e6b42dcdb: Status 404 returned error can't find the container with id 71324049004fe36c52159953648258ff1a350640bae548525d2cdb5e6b42dcdb Dec 08 17:53:51 crc kubenswrapper[5113]: I1208 17:53:51.337782 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59"] Dec 08 17:53:52 crc kubenswrapper[5113]: I1208 17:53:52.129083 5113 generic.go:358] "Generic (PLEG): container finished" podID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerID="42f2d9b7dc0e9654882459e61c6fc722453560135e56b5f9196e4c6afe7d2760" exitCode=0 Dec 08 17:53:52 crc kubenswrapper[5113]: I1208 17:53:52.129801 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerDied","Data":"42f2d9b7dc0e9654882459e61c6fc722453560135e56b5f9196e4c6afe7d2760"} Dec 08 17:53:52 crc kubenswrapper[5113]: I1208 17:53:52.144460 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" event={"ID":"82a50f23-a9ba-4f40-ae26-112badc78302","Type":"ContainerStarted","Data":"71324049004fe36c52159953648258ff1a350640bae548525d2cdb5e6b42dcdb"} Dec 08 17:53:53 crc kubenswrapper[5113]: I1208 17:53:53.154285 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" event={"ID":"82a50f23-a9ba-4f40-ae26-112badc78302","Type":"ContainerStarted","Data":"cab7aa87a1c0756179895a266a8c638b0788732a67380a218bc929c38ab2a378"} Dec 08 17:53:53 crc kubenswrapper[5113]: I1208 17:53:53.250240 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-fl5l8" Dec 08 17:53:53 crc kubenswrapper[5113]: I1208 17:53:53.255873 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:53:53 crc kubenswrapper[5113]: I1208 17:53:53.255950 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:53:53 crc kubenswrapper[5113]: I1208 17:53:53.383545 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-r9xfs"] Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.080212 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.172506 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" event={"ID":"dc7fbeca-a482-46cb-b595-bdba3db567f5","Type":"ContainerDied","Data":"3b43c1f7bf768a8014bbc4e6081e4ef2f542bb04083f186fe2f44796bbe51dbe"} Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.172589 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b43c1f7bf768a8014bbc4e6081e4ef2f542bb04083f186fe2f44796bbe51dbe" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.172585 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ecxwxf" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.175371 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerStarted","Data":"a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0"} Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.177946 5113 generic.go:358] "Generic (PLEG): container finished" podID="82a50f23-a9ba-4f40-ae26-112badc78302" containerID="cab7aa87a1c0756179895a266a8c638b0788732a67380a218bc929c38ab2a378" exitCode=0 Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.178222 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" event={"ID":"82a50f23-a9ba-4f40-ae26-112badc78302","Type":"ContainerDied","Data":"cab7aa87a1c0756179895a266a8c638b0788732a67380a218bc929c38ab2a378"} Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.192434 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-util\") pod \"dc7fbeca-a482-46cb-b595-bdba3db567f5\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.192537 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fzlg\" (UniqueName: \"kubernetes.io/projected/dc7fbeca-a482-46cb-b595-bdba3db567f5-kube-api-access-9fzlg\") pod \"dc7fbeca-a482-46cb-b595-bdba3db567f5\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.192595 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-bundle\") pod \"dc7fbeca-a482-46cb-b595-bdba3db567f5\" (UID: \"dc7fbeca-a482-46cb-b595-bdba3db567f5\") " Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.194121 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-bundle" (OuterVolumeSpecName: "bundle") pod "dc7fbeca-a482-46cb-b595-bdba3db567f5" (UID: "dc7fbeca-a482-46cb-b595-bdba3db567f5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.210102 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-util" (OuterVolumeSpecName: "util") pod "dc7fbeca-a482-46cb-b595-bdba3db567f5" (UID: "dc7fbeca-a482-46cb-b595-bdba3db567f5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.219450 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7fbeca-a482-46cb-b595-bdba3db567f5-kube-api-access-9fzlg" (OuterVolumeSpecName: "kube-api-access-9fzlg") pod "dc7fbeca-a482-46cb-b595-bdba3db567f5" (UID: "dc7fbeca-a482-46cb-b595-bdba3db567f5"). InnerVolumeSpecName "kube-api-access-9fzlg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.296430 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-util\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.296495 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9fzlg\" (UniqueName: \"kubernetes.io/projected/dc7fbeca-a482-46cb-b595-bdba3db567f5-kube-api-access-9fzlg\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:54 crc kubenswrapper[5113]: I1208 17:53:54.296513 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc7fbeca-a482-46cb-b595-bdba3db567f5-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 17:53:55 crc kubenswrapper[5113]: I1208 17:53:55.193659 5113 generic.go:358] "Generic (PLEG): container finished" podID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerID="a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0" exitCode=0 Dec 08 17:53:55 crc kubenswrapper[5113]: I1208 17:53:55.193971 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerDied","Data":"a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0"} Dec 08 17:53:56 crc kubenswrapper[5113]: I1208 17:53:56.227364 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerStarted","Data":"e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4"} Dec 08 17:53:56 crc kubenswrapper[5113]: I1208 17:53:56.270196 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:56 crc kubenswrapper[5113]: I1208 17:53:56.272207 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lvnnn" Dec 08 17:53:56 crc kubenswrapper[5113]: I1208 17:53:56.973701 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:56 crc kubenswrapper[5113]: I1208 17:53:56.973881 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:57 crc kubenswrapper[5113]: I1208 17:53:57.360094 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:57 crc kubenswrapper[5113]: I1208 17:53:57.554202 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lvnnn" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="registry-server" probeResult="failure" output=< Dec 08 17:53:57 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Dec 08 17:53:57 crc kubenswrapper[5113]: > Dec 08 17:53:57 crc kubenswrapper[5113]: I1208 17:53:57.670125 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lvnnn" podStartSLOduration=7.383862399 podStartE2EDuration="12.670105456s" podCreationTimestamp="2025-12-08 17:53:45 +0000 UTC" firstStartedPulling="2025-12-08 17:53:47.850062839 +0000 UTC m=+793.565855955" lastFinishedPulling="2025-12-08 17:53:53.136305896 +0000 UTC m=+798.852099012" observedRunningTime="2025-12-08 17:53:56.608422377 +0000 UTC m=+802.324215523" watchObservedRunningTime="2025-12-08 17:53:57.670105456 +0000 UTC m=+803.385898572" Dec 08 17:53:58 crc kubenswrapper[5113]: I1208 17:53:58.390841 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4lmf" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.263470 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bznc9"] Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264477 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="pull" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264497 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="pull" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264512 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="util" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264520 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="util" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264562 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="extract" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264572 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="extract" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264589 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="extract" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264596 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="extract" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264609 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="util" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264617 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="util" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264648 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="pull" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264655 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="pull" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264768 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b20573fb-1782-4e38-98e2-ba89edd97c4f" containerName="extract" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.264783 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="dc7fbeca-a482-46cb-b595-bdba3db567f5" containerName="extract" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.290124 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.303761 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.304096 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-6mf5l\"" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.304452 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.416941 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745hv\" (UniqueName: \"kubernetes.io/projected/c6536696-344d-4f39-a4fc-b709e5b39d61-kube-api-access-745hv\") pod \"interconnect-operator-78b9bd8798-bznc9\" (UID: \"c6536696-344d-4f39-a4fc-b709e5b39d61\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.437008 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bznc9"] Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.527747 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-745hv\" (UniqueName: \"kubernetes.io/projected/c6536696-344d-4f39-a4fc-b709e5b39d61-kube-api-access-745hv\") pod \"interconnect-operator-78b9bd8798-bznc9\" (UID: \"c6536696-344d-4f39-a4fc-b709e5b39d61\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.662087 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-745hv\" (UniqueName: \"kubernetes.io/projected/c6536696-344d-4f39-a4fc-b709e5b39d61-kube-api-access-745hv\") pod \"interconnect-operator-78b9bd8798-bznc9\" (UID: \"c6536696-344d-4f39-a4fc-b709e5b39d61\") " pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" Dec 08 17:53:59 crc kubenswrapper[5113]: I1208 17:53:59.716307 5113 util.go:30] "No sandbox for pod can be found. 
Dec 08 17:54:00 crc kubenswrapper[5113]: I1208 17:54:00.256374 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxlrf"]
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.252917 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxlrf"]
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.253233 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.260909 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-utilities\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.260977 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p96w7\" (UniqueName: \"kubernetes.io/projected/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-kube-api-access-p96w7\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.261241 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-catalog-content\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.363058 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-utilities\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.363569 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p96w7\" (UniqueName: \"kubernetes.io/projected/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-kube-api-access-p96w7\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.363643 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-catalog-content\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.363978 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-utilities\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.364162 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-catalog-content\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.391648 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p96w7\" (UniqueName: \"kubernetes.io/projected/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-kube-api-access-p96w7\") pod \"community-operators-rxlrf\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:01 crc kubenswrapper[5113]: I1208 17:54:01.676104 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:02 crc kubenswrapper[5113]: I1208 17:54:02.969463 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-7754dcd88f-fvq7h"]
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.743534 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7754dcd88f-fvq7h"]
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.743591 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-5lkbk"]
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.743745 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.747757 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.748276 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-rb9mn\""
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.756761 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-5lkbk"]
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.757149 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.759924 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.760865 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-ltbdx\""
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.760885 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.788620 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-webhook-cert\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.788759 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-apiservice-cert\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.788828 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2cp\" (UniqueName: \"kubernetes.io/projected/fc492807-55ac-46bb-9974-a035552387e8-kube-api-access-zb2cp\") pod \"obo-prometheus-operator-86648f486b-5lkbk\" (UID: \"fc492807-55ac-46bb-9974-a035552387e8\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.788896 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6s7b\" (UniqueName: \"kubernetes.io/projected/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-kube-api-access-l6s7b\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.852766 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"]
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.872620 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.873262 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"]
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.878635 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-2d5fg\""
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.887679 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890570 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85\" (UID: \"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890653 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l6s7b\" (UniqueName: \"kubernetes.io/projected/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-kube-api-access-l6s7b\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890752 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc347132-c28a-43ec-9fde-4fbfb793b79f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7\" (UID: \"fc347132-c28a-43ec-9fde-4fbfb793b79f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-webhook-cert\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890845 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-apiservice-cert\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890872 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc347132-c28a-43ec-9fde-4fbfb793b79f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7\" (UID: \"fc347132-c28a-43ec-9fde-4fbfb793b79f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890901 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85\" (UID: \"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"
Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.890926 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2cp\" (UniqueName: \"kubernetes.io/projected/fc492807-55ac-46bb-9974-a035552387e8-kube-api-access-zb2cp\") pod \"obo-prometheus-operator-86648f486b-5lkbk\" (UID: \"fc492807-55ac-46bb-9974-a035552387e8\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk"
(UID: \"fc492807-55ac-46bb-9974-a035552387e8\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.896908 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.906131 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-webhook-cert\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.910357 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"] Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.915351 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-apiservice-cert\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.925961 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2cp\" (UniqueName: \"kubernetes.io/projected/fc492807-55ac-46bb-9974-a035552387e8-kube-api-access-zb2cp\") pod \"obo-prometheus-operator-86648f486b-5lkbk\" (UID: \"fc492807-55ac-46bb-9974-a035552387e8\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.926382 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"] Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.933452 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6s7b\" (UniqueName: \"kubernetes.io/projected/5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412-kube-api-access-l6s7b\") pod \"elastic-operator-7754dcd88f-fvq7h\" (UID: \"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412\") " pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.992013 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85\" (UID: \"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.992193 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc347132-c28a-43ec-9fde-4fbfb793b79f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7\" (UID: \"fc347132-c28a-43ec-9fde-4fbfb793b79f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.992270 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc347132-c28a-43ec-9fde-4fbfb793b79f-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7\" (UID: \"fc347132-c28a-43ec-9fde-4fbfb793b79f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" Dec 08 17:54:03 crc kubenswrapper[5113]: I1208 17:54:03.992303 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85\" (UID: \"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.000926 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85\" (UID: \"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.001170 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc347132-c28a-43ec-9fde-4fbfb793b79f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7\" (UID: \"fc347132-c28a-43ec-9fde-4fbfb793b79f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.006261 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc347132-c28a-43ec-9fde-4fbfb793b79f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7\" (UID: \"fc347132-c28a-43ec-9fde-4fbfb793b79f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.013655 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85\" (UID: \"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.041416 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-8qn4t"] Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.080488 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.088355 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk" Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.249128 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-8qn4t"] Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.249185 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-2wsm6"] Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.249453 5113 util.go:30] "No sandbox for pod can be found. 
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.253864 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-x7fd7\""
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.254116 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.292149 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.297156 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gml7z\" (UniqueName: \"kubernetes.io/projected/f2b774f1-2516-4d43-9ee4-5c9039933dc5-kube-api-access-gml7z\") pod \"observability-operator-78c97476f4-8qn4t\" (UID: \"f2b774f1-2516-4d43-9ee4-5c9039933dc5\") " pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.297276 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2b774f1-2516-4d43-9ee4-5c9039933dc5-observability-operator-tls\") pod \"observability-operator-78c97476f4-8qn4t\" (UID: \"f2b774f1-2516-4d43-9ee4-5c9039933dc5\") " pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.307903 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.399139 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2b774f1-2516-4d43-9ee4-5c9039933dc5-observability-operator-tls\") pod \"observability-operator-78c97476f4-8qn4t\" (UID: \"f2b774f1-2516-4d43-9ee4-5c9039933dc5\") " pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.399250 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gml7z\" (UniqueName: \"kubernetes.io/projected/f2b774f1-2516-4d43-9ee4-5c9039933dc5-kube-api-access-gml7z\") pod \"observability-operator-78c97476f4-8qn4t\" (UID: \"f2b774f1-2516-4d43-9ee4-5c9039933dc5\") " pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.404407 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2b774f1-2516-4d43-9ee4-5c9039933dc5-observability-operator-tls\") pod \"observability-operator-78c97476f4-8qn4t\" (UID: \"f2b774f1-2516-4d43-9ee4-5c9039933dc5\") " pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.417603 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gml7z\" (UniqueName: \"kubernetes.io/projected/f2b774f1-2516-4d43-9ee4-5c9039933dc5-kube-api-access-gml7z\") pod \"observability-operator-78c97476f4-8qn4t\" (UID: \"f2b774f1-2516-4d43-9ee4-5c9039933dc5\") " pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.469321 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-2wsm6"]
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.469550 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.474831 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-lzqjg\""
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.500660 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c60e3c95-ee39-4a90-9c03-b96f05f4ec97-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-2wsm6\" (UID: \"c60e3c95-ee39-4a90-9c03-b96f05f4ec97\") " pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.500738 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jwj\" (UniqueName: \"kubernetes.io/projected/c60e3c95-ee39-4a90-9c03-b96f05f4ec97-kube-api-access-q5jwj\") pod \"perses-operator-68bdb49cbf-2wsm6\" (UID: \"c60e3c95-ee39-4a90-9c03-b96f05f4ec97\") " pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.573114 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-8qn4t"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.602881 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c60e3c95-ee39-4a90-9c03-b96f05f4ec97-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-2wsm6\" (UID: \"c60e3c95-ee39-4a90-9c03-b96f05f4ec97\") " pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.602969 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5jwj\" (UniqueName: \"kubernetes.io/projected/c60e3c95-ee39-4a90-9c03-b96f05f4ec97-kube-api-access-q5jwj\") pod \"perses-operator-68bdb49cbf-2wsm6\" (UID: \"c60e3c95-ee39-4a90-9c03-b96f05f4ec97\") " pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.604560 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c60e3c95-ee39-4a90-9c03-b96f05f4ec97-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-2wsm6\" (UID: \"c60e3c95-ee39-4a90-9c03-b96f05f4ec97\") " pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.630566 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5jwj\" (UniqueName: \"kubernetes.io/projected/c60e3c95-ee39-4a90-9c03-b96f05f4ec97-kube-api-access-q5jwj\") pod \"perses-operator-68bdb49cbf-2wsm6\" (UID: \"c60e3c95-ee39-4a90-9c03-b96f05f4ec97\") " pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.878900 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6"
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.918957 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4lmf"]
Dec 08 17:54:04 crc kubenswrapper[5113]: I1208 17:54:04.919455 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s4lmf" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="registry-server" containerID="cri-o://bc5ca72d7a4fe4364f7a566c5383d4f94b45e82b5117f15e3ec73be8b078d1cb" gracePeriod=2
Dec 08 17:54:05 crc kubenswrapper[5113]: I1208 17:54:05.483779 5113 generic.go:358] "Generic (PLEG): container finished" podID="44406118-eeb7-4eba-b3d1-01873c372290" containerID="bc5ca72d7a4fe4364f7a566c5383d4f94b45e82b5117f15e3ec73be8b078d1cb" exitCode=0
Dec 08 17:54:05 crc kubenswrapper[5113]: I1208 17:54:05.484438 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerDied","Data":"bc5ca72d7a4fe4364f7a566c5383d4f94b45e82b5117f15e3ec73be8b078d1cb"}
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.323284 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lvnnn"
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.397412 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lvnnn"
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.509383 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4lmf" event={"ID":"44406118-eeb7-4eba-b3d1-01873c372290","Type":"ContainerDied","Data":"a47161ecc2c19ecaf1563c73084e59ecdbd8df2cc2ed1f5d7316478d0be68e81"}
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.509442 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a47161ecc2c19ecaf1563c73084e59ecdbd8df2cc2ed1f5d7316478d0be68e81"
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.539849 5113 generic.go:358] "Generic (PLEG): container finished" podID="82a50f23-a9ba-4f40-ae26-112badc78302" containerID="f82c0a9564ad20bd3612bfff74b00b9a1bfb953e11ddee273d5a18b7929f4a26" exitCode=0
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.541472 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" event={"ID":"82a50f23-a9ba-4f40-ae26-112badc78302","Type":"ContainerDied","Data":"f82c0a9564ad20bd3612bfff74b00b9a1bfb953e11ddee273d5a18b7929f4a26"}
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.539871 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4lmf"
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.542669 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-7754dcd88f-fvq7h"]
Dec 08 17:54:06 crc kubenswrapper[5113]: W1208 17:54:06.562483 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ba6ebf4_af19_4ac0_a94b_ea0e9c3f6412.slice/crio-9f060d13d3ab964f943646ecc3ce5b945c4cd25b1f4b61b0b85b082c6b75b66c WatchSource:0}: Error finding container 9f060d13d3ab964f943646ecc3ce5b945c4cd25b1f4b61b0b85b082c6b75b66c: Status 404 returned error can't find the container with id 9f060d13d3ab964f943646ecc3ce5b945c4cd25b1f4b61b0b85b082c6b75b66c
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.575267 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7"]
Dec 08 17:54:06 crc kubenswrapper[5113]: W1208 17:54:06.577831 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc347132_c28a_43ec_9fde_4fbfb793b79f.slice/crio-044cb548ea2d344d39967f867a2816e6ba8f3c812dcb24badc043a4a7a786e37 WatchSource:0}: Error finding container 044cb548ea2d344d39967f867a2816e6ba8f3c812dcb24badc043a4a7a786e37: Status 404 returned error can't find the container with id 044cb548ea2d344d39967f867a2816e6ba8f3c812dcb24badc043a4a7a786e37
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.654568 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-bznc9"]
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.673106 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-catalog-content\") pod \"44406118-eeb7-4eba-b3d1-01873c372290\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") "
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.673236 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2f4k\" (UniqueName: \"kubernetes.io/projected/44406118-eeb7-4eba-b3d1-01873c372290-kube-api-access-l2f4k\") pod \"44406118-eeb7-4eba-b3d1-01873c372290\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") "
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.673320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-utilities\") pod \"44406118-eeb7-4eba-b3d1-01873c372290\" (UID: \"44406118-eeb7-4eba-b3d1-01873c372290\") "
Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.674941 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-utilities" (OuterVolumeSpecName: "utilities") pod "44406118-eeb7-4eba-b3d1-01873c372290" (UID: "44406118-eeb7-4eba-b3d1-01873c372290"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.698593 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44406118-eeb7-4eba-b3d1-01873c372290-kube-api-access-l2f4k" (OuterVolumeSpecName: "kube-api-access-l2f4k") pod "44406118-eeb7-4eba-b3d1-01873c372290" (UID: "44406118-eeb7-4eba-b3d1-01873c372290"). InnerVolumeSpecName "kube-api-access-l2f4k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.775758 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2f4k\" (UniqueName: \"kubernetes.io/projected/44406118-eeb7-4eba-b3d1-01873c372290-kube-api-access-l2f4k\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.776204 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.802146 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxlrf"] Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.837209 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-5lkbk"] Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.866162 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-8qn4t"] Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.885873 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-2wsm6"] Dec 08 17:54:06 crc kubenswrapper[5113]: I1208 17:54:06.896005 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85"] Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.029513 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44406118-eeb7-4eba-b3d1-01873c372290" (UID: "44406118-eeb7-4eba-b3d1-01873c372290"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.088172 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44406118-eeb7-4eba-b3d1-01873c372290-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.550422 5113 generic.go:358] "Generic (PLEG): container finished" podID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerID="4ce77308355ef661059713b8bb928ce6751ab5afa5bd551a043b6e1c0becbe16" exitCode=0 Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.550617 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerDied","Data":"4ce77308355ef661059713b8bb928ce6751ab5afa5bd551a043b6e1c0becbe16"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.550683 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerStarted","Data":"42f9dd87e5d038658ec332f4f9a977693be150c31ccb58b9a1f5feb70510efc3"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.552070 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-8qn4t" event={"ID":"f2b774f1-2516-4d43-9ee4-5c9039933dc5","Type":"ContainerStarted","Data":"1a595909800f1c72f98cc6f87e4e179d590a211965a91b1d5247c48abbe9f818"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.560352 5113 generic.go:358] "Generic (PLEG): container finished" podID="82a50f23-a9ba-4f40-ae26-112badc78302" containerID="e8c2e7d53fb8c0f1654331c7ffe00908b73579f199db6dd60eb8dd369fe19974" exitCode=0 Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.560564 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" event={"ID":"82a50f23-a9ba-4f40-ae26-112badc78302","Type":"ContainerDied","Data":"e8c2e7d53fb8c0f1654331c7ffe00908b73579f199db6dd60eb8dd369fe19974"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.570357 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" event={"ID":"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412","Type":"ContainerStarted","Data":"9f060d13d3ab964f943646ecc3ce5b945c4cd25b1f4b61b0b85b082c6b75b66c"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.572888 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" event={"ID":"fc347132-c28a-43ec-9fde-4fbfb793b79f","Type":"ContainerStarted","Data":"044cb548ea2d344d39967f867a2816e6ba8f3c812dcb24badc043a4a7a786e37"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.577578 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk" event={"ID":"fc492807-55ac-46bb-9974-a035552387e8","Type":"ContainerStarted","Data":"01ecd77a527a7969c2c7ede4eea990e2ba5262b52ebbb79666d4e597c275b33d"} Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.580244 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6" event={"ID":"c60e3c95-ee39-4a90-9c03-b96f05f4ec97","Type":"ContainerStarted","Data":"d587810bb2415ba95c674ae37c377a4b162010af9d5e4227c6004856abd776d0"} Dec 08 17:54:07 crc 
Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.584536 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4lmf"
Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.585197 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" event={"ID":"c6536696-344d-4f39-a4fc-b709e5b39d61","Type":"ContainerStarted","Data":"0569c564342e082d48a1ec45f6a43ab23276987dcb740b626484b18a356adffe"}
Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.803147 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4lmf"]
Dec 08 17:54:07 crc kubenswrapper[5113]: I1208 17:54:07.814945 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s4lmf"]
Dec 08 17:54:08 crc kubenswrapper[5113]: I1208 17:54:08.633738 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerStarted","Data":"49a9d5c3920ebd40979a69d74b2d02410c7b289c45f2802448e1a09acbcee4a5"}
Dec 08 17:54:08 crc kubenswrapper[5113]: I1208 17:54:08.697628 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44406118-eeb7-4eba-b3d1-01873c372290" path="/var/lib/kubelet/pods/44406118-eeb7-4eba-b3d1-01873c372290/volumes"
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.290234 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59"
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.457052 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-util\") pod \"82a50f23-a9ba-4f40-ae26-112badc78302\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") "
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.457287 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-bundle\") pod \"82a50f23-a9ba-4f40-ae26-112badc78302\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") "
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.457370 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s6rd\" (UniqueName: \"kubernetes.io/projected/82a50f23-a9ba-4f40-ae26-112badc78302-kube-api-access-4s6rd\") pod \"82a50f23-a9ba-4f40-ae26-112badc78302\" (UID: \"82a50f23-a9ba-4f40-ae26-112badc78302\") "
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.460360 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-bundle" (OuterVolumeSpecName: "bundle") pod "82a50f23-a9ba-4f40-ae26-112badc78302" (UID: "82a50f23-a9ba-4f40-ae26-112badc78302"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.481071 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82a50f23-a9ba-4f40-ae26-112badc78302-kube-api-access-4s6rd" (OuterVolumeSpecName: "kube-api-access-4s6rd") pod "82a50f23-a9ba-4f40-ae26-112badc78302" (UID: "82a50f23-a9ba-4f40-ae26-112badc78302"). InnerVolumeSpecName "kube-api-access-4s6rd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.483361 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-util" (OuterVolumeSpecName: "util") pod "82a50f23-a9ba-4f40-ae26-112badc78302" (UID: "82a50f23-a9ba-4f40-ae26-112badc78302"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.561363 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.561423 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4s6rd\" (UniqueName: \"kubernetes.io/projected/82a50f23-a9ba-4f40-ae26-112badc78302-kube-api-access-4s6rd\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.561440 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/82a50f23-a9ba-4f40-ae26-112badc78302-util\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.661952 5113 generic.go:358] "Generic (PLEG): container finished" podID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerID="49a9d5c3920ebd40979a69d74b2d02410c7b289c45f2802448e1a09acbcee4a5" exitCode=0
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.663263 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerDied","Data":"49a9d5c3920ebd40979a69d74b2d02410c7b289c45f2802448e1a09acbcee4a5"}
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.684744 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59" event={"ID":"82a50f23-a9ba-4f40-ae26-112badc78302","Type":"ContainerDied","Data":"71324049004fe36c52159953648258ff1a350640bae548525d2cdb5e6b42dcdb"}
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.684834 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71324049004fe36c52159953648258ff1a350640bae548525d2cdb5e6b42dcdb"
Dec 08 17:54:09 crc kubenswrapper[5113]: I1208 17:54:09.685368 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931alfp59"
Dec 08 17:54:10 crc kubenswrapper[5113]: I1208 17:54:10.754297 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerStarted","Data":"0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01"}
Dec 08 17:54:10 crc kubenswrapper[5113]: I1208 17:54:10.789200 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxlrf" podStartSLOduration=10.098211306 podStartE2EDuration="10.789172563s" podCreationTimestamp="2025-12-08 17:54:00 +0000 UTC" firstStartedPulling="2025-12-08 17:54:07.552205038 +0000 UTC m=+813.267998154" lastFinishedPulling="2025-12-08 17:54:08.243166295 +0000 UTC m=+813.958959411" observedRunningTime="2025-12-08 17:54:10.786377451 +0000 UTC m=+816.502170577" watchObservedRunningTime="2025-12-08 17:54:10.789172563 +0000 UTC m=+816.504965679"
Dec 08 17:54:11 crc kubenswrapper[5113]: I1208 17:54:11.725926 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:11 crc kubenswrapper[5113]: I1208 17:54:11.726447 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxlrf"
Dec 08 17:54:12 crc kubenswrapper[5113]: I1208 17:54:12.988271 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rxlrf" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" probeResult="failure" output=<
Dec 08 17:54:12 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s
Dec 08 17:54:12 crc kubenswrapper[5113]: >
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.214624 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lvnnn"]
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.215078 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lvnnn" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="registry-server" containerID="cri-o://e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4" gracePeriod=2
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.914753 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lvnnn"
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.989231 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-catalog-content\") pod \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") "
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.989297 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7vnn\" (UniqueName: \"kubernetes.io/projected/b72b075a-b742-4c30-ba4e-e8a31e8b7872-kube-api-access-w7vnn\") pod \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") "
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.989384 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-utilities\") pod \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\" (UID: \"b72b075a-b742-4c30-ba4e-e8a31e8b7872\") "
Dec 08 17:54:13 crc kubenswrapper[5113]: I1208 17:54:13.997915 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-utilities" (OuterVolumeSpecName: "utilities") pod "b72b075a-b742-4c30-ba4e-e8a31e8b7872" (UID: "b72b075a-b742-4c30-ba4e-e8a31e8b7872"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.009108 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72b075a-b742-4c30-ba4e-e8a31e8b7872-kube-api-access-w7vnn" (OuterVolumeSpecName: "kube-api-access-w7vnn") pod "b72b075a-b742-4c30-ba4e-e8a31e8b7872" (UID: "b72b075a-b742-4c30-ba4e-e8a31e8b7872"). InnerVolumeSpecName "kube-api-access-w7vnn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.037573 5113 generic.go:358] "Generic (PLEG): container finished" podID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerID="e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4" exitCode=0
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.037680 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerDied","Data":"e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4"}
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.037708 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lvnnn" event={"ID":"b72b075a-b742-4c30-ba4e-e8a31e8b7872","Type":"ContainerDied","Data":"d88c460c99491d1cfe9785eba3d05829f4ab310bca149f5db5abd8091758e0bc"}
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.037724 5113 scope.go:117] "RemoveContainer" containerID="e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.037887 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lvnnn"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.071426 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b72b075a-b742-4c30-ba4e-e8a31e8b7872" (UID: "b72b075a-b742-4c30-ba4e-e8a31e8b7872"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.080899 5113 scope.go:117] "RemoveContainer" containerID="a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.111519 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.111573 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w7vnn\" (UniqueName: \"kubernetes.io/projected/b72b075a-b742-4c30-ba4e-e8a31e8b7872-kube-api-access-w7vnn\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.111587 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72b075a-b742-4c30-ba4e-e8a31e8b7872-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.113259 5113 scope.go:117] "RemoveContainer" containerID="662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.156052 5113 scope.go:117] "RemoveContainer" containerID="e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4"
Dec 08 17:54:14 crc kubenswrapper[5113]: E1208 17:54:14.156731 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4\": container with ID starting with e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4 not found: ID does not exist" containerID="e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.156770 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4"} err="failed to get container status \"e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4\": rpc error: code = NotFound desc = could not find container \"e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4\": container with ID starting with e2dec0bc4af8b15d5c24ab6bafcc159d0d03bb9d7b4a30c1488d2b62367d66c4 not found: ID does not exist"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.156801 5113 scope.go:117] "RemoveContainer" containerID="a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0"
Dec 08 17:54:14 crc kubenswrapper[5113]: E1208 17:54:14.157328 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0\": container with ID starting with a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0 not found: ID does not exist" containerID="a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.157351 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0"} err="failed to get container status \"a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0\": rpc error: code = NotFound desc = could not find container \"a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0\": container with ID starting with a9bde819f26eec359b85f7f7268b2a7a7b080a2786fd62b5abe719422c130ce0 not found: ID does not exist"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.157366 5113 scope.go:117] "RemoveContainer" containerID="662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f"
Dec 08 17:54:14 crc kubenswrapper[5113]: E1208 17:54:14.157574 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f\": container with ID starting with 662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f not found: ID does not exist" containerID="662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.157598 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f"} err="failed to get container status \"662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f\": rpc error: code = NotFound desc = could not find container \"662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f\": container with ID starting with 662c36aa1bc135b12f72c8744639359f132384f0e78ca20cb69a55bc5b6a150f not found: ID does not exist"
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.369352 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lvnnn"]
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.378167 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lvnnn"]
Dec 08 17:54:14 crc kubenswrapper[5113]: I1208 17:54:14.695638 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" path="/var/lib/kubelet/pods/b72b075a-b742-4c30-ba4e-e8a31e8b7872/volumes"
Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.552486 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb"]
Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.553990 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="extract"
Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554023 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="extract"
Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554120 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="extract-utilities"
Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554133 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="extract-utilities"
Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554147 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="extract-content"
kubenswrapper[5113]: I1208 17:54:16.554147 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="extract-content" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554159 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="extract-content" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554174 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="extract-content" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554185 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="extract-content" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554199 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="util" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554210 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="util" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554223 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="registry-server" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554234 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="registry-server" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554256 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="registry-server" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554264 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="registry-server" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554282 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="pull" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554290 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="pull" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554318 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="extract-utilities" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.554331 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="extract-utilities" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.557499 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="82a50f23-a9ba-4f40-ae26-112badc78302" containerName="extract" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.557576 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b72b075a-b742-4c30-ba4e-e8a31e8b7872" containerName="registry-server" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.557607 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="44406118-eeb7-4eba-b3d1-01873c372290" containerName="registry-server" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.720996 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb"] Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.721307 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.735087 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.735184 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.735733 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-pmv99\"" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.839434 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrjmt\" (UniqueName: \"kubernetes.io/projected/1541c5bf-56b3-40b8-953f-94a953365060-kube-api-access-zrjmt\") pod \"cert-manager-operator-controller-manager-64c74584c4-rjtlb\" (UID: \"1541c5bf-56b3-40b8-953f-94a953365060\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.839927 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1541c5bf-56b3-40b8-953f-94a953365060-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rjtlb\" (UID: \"1541c5bf-56b3-40b8-953f-94a953365060\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.941551 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zrjmt\" (UniqueName: \"kubernetes.io/projected/1541c5bf-56b3-40b8-953f-94a953365060-kube-api-access-zrjmt\") pod \"cert-manager-operator-controller-manager-64c74584c4-rjtlb\" (UID: \"1541c5bf-56b3-40b8-953f-94a953365060\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.941634 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1541c5bf-56b3-40b8-953f-94a953365060-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rjtlb\" (UID: \"1541c5bf-56b3-40b8-953f-94a953365060\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.942387 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1541c5bf-56b3-40b8-953f-94a953365060-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rjtlb\" (UID: \"1541c5bf-56b3-40b8-953f-94a953365060\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:16 crc kubenswrapper[5113]: I1208 17:54:16.985837 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrjmt\" (UniqueName: \"kubernetes.io/projected/1541c5bf-56b3-40b8-953f-94a953365060-kube-api-access-zrjmt\") pod \"cert-manager-operator-controller-manager-64c74584c4-rjtlb\" 
(UID: \"1541c5bf-56b3-40b8-953f-94a953365060\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:17 crc kubenswrapper[5113]: I1208 17:54:17.079381 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" Dec 08 17:54:17 crc kubenswrapper[5113]: I1208 17:54:17.773143 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb"] Dec 08 17:54:18 crc kubenswrapper[5113]: I1208 17:54:18.430337 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" containerName="registry" containerID="cri-o://18e9efd04ea4f44e72b9e8240ced2f93470579db03d4a422b8ba77cf0cf98ed7" gracePeriod=30 Dec 08 17:54:20 crc kubenswrapper[5113]: I1208 17:54:20.131345 5113 generic.go:358] "Generic (PLEG): container finished" podID="401e85c2-a1e6-4642-80cf-23e461cef995" containerID="18e9efd04ea4f44e72b9e8240ced2f93470579db03d4a422b8ba77cf0cf98ed7" exitCode=0 Dec 08 17:54:20 crc kubenswrapper[5113]: I1208 17:54:20.131459 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" event={"ID":"401e85c2-a1e6-4642-80cf-23e461cef995","Type":"ContainerDied","Data":"18e9efd04ea4f44e72b9e8240ced2f93470579db03d4a422b8ba77cf0cf98ed7"} Dec 08 17:54:21 crc kubenswrapper[5113]: I1208 17:54:21.736936 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxlrf" Dec 08 17:54:21 crc kubenswrapper[5113]: I1208 17:54:21.793202 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxlrf" Dec 08 17:54:23 crc kubenswrapper[5113]: I1208 17:54:23.259791 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:54:23 crc kubenswrapper[5113]: I1208 17:54:23.259910 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:54:23 crc kubenswrapper[5113]: I1208 17:54:23.260011 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 17:54:23 crc kubenswrapper[5113]: I1208 17:54:23.260861 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e91e9c2d7b1e37ebd3bc5750a4f89f644abb6b97e12e01ad60b986cb9a1422b5"} pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:54:23 crc kubenswrapper[5113]: I1208 17:54:23.260943 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" 
containerName="machine-config-daemon" containerID="cri-o://e91e9c2d7b1e37ebd3bc5750a4f89f644abb6b97e12e01ad60b986cb9a1422b5" gracePeriod=600 Dec 08 17:54:25 crc kubenswrapper[5113]: I1208 17:54:25.123861 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxlrf"] Dec 08 17:54:25 crc kubenswrapper[5113]: I1208 17:54:25.124303 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxlrf" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" containerID="cri-o://0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" gracePeriod=2 Dec 08 17:54:25 crc kubenswrapper[5113]: I1208 17:54:25.182687 5113 generic.go:358] "Generic (PLEG): container finished" podID="52658507-b084-49cb-a694-f012d44ccc82" containerID="e91e9c2d7b1e37ebd3bc5750a4f89f644abb6b97e12e01ad60b986cb9a1422b5" exitCode=0 Dec 08 17:54:25 crc kubenswrapper[5113]: I1208 17:54:25.182798 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerDied","Data":"e91e9c2d7b1e37ebd3bc5750a4f89f644abb6b97e12e01ad60b986cb9a1422b5"} Dec 08 17:54:25 crc kubenswrapper[5113]: I1208 17:54:25.183154 5113 scope.go:117] "RemoveContainer" containerID="6354552aeb6257facad872f00416b46d71ae4e5554416dec9e0813960cf8c0f8" Dec 08 17:54:26 crc kubenswrapper[5113]: I1208 17:54:26.016495 5113 patch_prober.go:28] interesting pod/image-registry-66587d64c8-r9xfs container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Dec 08 17:54:26 crc kubenswrapper[5113]: I1208 17:54:26.017112 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Dec 08 17:54:28 crc kubenswrapper[5113]: I1208 17:54:28.319307 5113 generic.go:358] "Generic (PLEG): container finished" podID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" exitCode=0 Dec 08 17:54:28 crc kubenswrapper[5113]: I1208 17:54:28.319398 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerDied","Data":"0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01"} Dec 08 17:54:31 crc kubenswrapper[5113]: E1208 17:54:31.783792 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:54:31 crc kubenswrapper[5113]: E1208 17:54:31.784409 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" 
containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:54:31 crc kubenswrapper[5113]: E1208 17:54:31.786482 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:54:31 crc kubenswrapper[5113]: E1208 17:54:31.786539 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-rxlrf" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" probeResult="unknown" Dec 08 17:54:35 crc kubenswrapper[5113]: W1208 17:54:35.596713 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1541c5bf_56b3_40b8_953f_94a953365060.slice/crio-f68de22dda30b437d500c5f19fd8ed54dfc792f1853f3f4369bddc8e05d99781 WatchSource:0}: Error finding container f68de22dda30b437d500c5f19fd8ed54dfc792f1853f3f4369bddc8e05d99781: Status 404 returned error can't find the container with id f68de22dda30b437d500c5f19fd8ed54dfc792f1853f3f4369bddc8e05d99781 Dec 08 17:54:36 crc kubenswrapper[5113]: I1208 17:54:36.383125 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" event={"ID":"1541c5bf-56b3-40b8-953f-94a953365060","Type":"ContainerStarted","Data":"f68de22dda30b437d500c5f19fd8ed54dfc792f1853f3f4369bddc8e05d99781"} Dec 08 17:54:41 crc kubenswrapper[5113]: I1208 17:54:41.016716 5113 patch_prober.go:28] interesting pod/image-registry-66587d64c8-r9xfs container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": context deadline exceeded" start-of-body= Dec 08 17:54:41 crc kubenswrapper[5113]: I1208 17:54:41.016812 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": context deadline exceeded" Dec 08 17:54:41 crc kubenswrapper[5113]: E1208 17:54:41.739355 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:54:41 crc kubenswrapper[5113]: E1208 17:54:41.740158 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:54:41 crc 
kubenswrapper[5113]: E1208 17:54:41.740746 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 17:54:41 crc kubenswrapper[5113]: E1208 17:54:41.740799 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-rxlrf" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" probeResult="unknown" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.113959 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.120190 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxlrf" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163118 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-registry-certificates\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163187 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-utilities\") pod \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163258 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-catalog-content\") pod \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163315 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-registry-tls\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163346 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p96w7\" (UniqueName: \"kubernetes.io/projected/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-kube-api-access-p96w7\") pod \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\" (UID: \"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163489 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/401e85c2-a1e6-4642-80cf-23e461cef995-installation-pull-secrets\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163772 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163816 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-bound-sa-token\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163887 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/401e85c2-a1e6-4642-80cf-23e461cef995-ca-trust-extracted\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163906 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftj9s\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-kube-api-access-ftj9s\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.163945 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-trusted-ca\") pod \"401e85c2-a1e6-4642-80cf-23e461cef995\" (UID: \"401e85c2-a1e6-4642-80cf-23e461cef995\") " Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.165084 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.166078 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.167716 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-utilities" (OuterVolumeSpecName: "utilities") pod "34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" (UID: "34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.173118 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.173147 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-kube-api-access-p96w7" (OuterVolumeSpecName: "kube-api-access-p96w7") pod "34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" (UID: "34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6"). InnerVolumeSpecName "kube-api-access-p96w7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.173465 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401e85c2-a1e6-4642-80cf-23e461cef995-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.173686 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-kube-api-access-ftj9s" (OuterVolumeSpecName: "kube-api-access-ftj9s") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "kube-api-access-ftj9s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.182530 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.185282 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/401e85c2-a1e6-4642-80cf-23e461cef995-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.186726 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "401e85c2-a1e6-4642-80cf-23e461cef995" (UID: "401e85c2-a1e6-4642-80cf-23e461cef995"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.220623 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" (UID: "34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.266696 5113 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/401e85c2-a1e6-4642-80cf-23e461cef995-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267125 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267203 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/401e85c2-a1e6-4642-80cf-23e461cef995-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267269 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftj9s\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-kube-api-access-ftj9s\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267347 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267429 5113 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/401e85c2-a1e6-4642-80cf-23e461cef995-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267501 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267571 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267638 5113 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/401e85c2-a1e6-4642-80cf-23e461cef995-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.267708 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p96w7\" (UniqueName: \"kubernetes.io/projected/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6-kube-api-access-p96w7\") on node \"crc\" DevicePath \"\"" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.443254 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" event={"ID":"401e85c2-a1e6-4642-80cf-23e461cef995","Type":"ContainerDied","Data":"0083c20293c97010ff9040f46107b86528768f11ead7c88f45b28058f5b9cf2a"} Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.443388 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-r9xfs" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.448318 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxlrf" event={"ID":"34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6","Type":"ContainerDied","Data":"42f9dd87e5d038658ec332f4f9a977693be150c31ccb58b9a1f5feb70510efc3"} Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.448469 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxlrf" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.491159 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-r9xfs"] Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.509593 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-r9xfs"] Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.524126 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxlrf"] Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.543644 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxlrf"] Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.688749 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" path="/var/lib/kubelet/pods/34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6/volumes" Dec 08 17:54:44 crc kubenswrapper[5113]: I1208 17:54:44.689723 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" path="/var/lib/kubelet/pods/401e85c2-a1e6-4642-80cf-23e461cef995/volumes" Dec 08 17:54:45 crc kubenswrapper[5113]: I1208 17:54:45.137440 5113 scope.go:117] "RemoveContainer" containerID="18e9efd04ea4f44e72b9e8240ced2f93470579db03d4a422b8ba77cf0cf98ed7" Dec 08 17:54:45 crc kubenswrapper[5113]: I1208 17:54:45.986340 5113 scope.go:117] "RemoveContainer" containerID="0d8fefc8a93f9d376c0776687a5e33f3c3be56f92ad73c06a3e95967f1a4bd01" Dec 08 17:54:46 crc kubenswrapper[5113]: I1208 17:54:46.022476 5113 scope.go:117] "RemoveContainer" containerID="49a9d5c3920ebd40979a69d74b2d02410c7b289c45f2802448e1a09acbcee4a5" Dec 08 17:54:46 crc kubenswrapper[5113]: I1208 17:54:46.102805 5113 scope.go:117] "RemoveContainer" containerID="4ce77308355ef661059713b8bb928ce6751ab5afa5bd551a043b6e1c0becbe16" Dec 08 17:54:46 crc kubenswrapper[5113]: I1208 17:54:46.465212 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"13d2c1fe38ff6a7a0cac1ade14681ccc0e31e7fbc1ba06630b2782faab18303e"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.474358 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85" event={"ID":"5adc3cb8-9a79-4d6a-bf1b-a4c1fbf2a3dd","Type":"ContainerStarted","Data":"2b8a6aa20f4b022741963e547c1f67fe676176f8901e5c6527ddd97d2b4b9aab"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.477303 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" 
event={"ID":"c6536696-344d-4f39-a4fc-b709e5b39d61","Type":"ContainerStarted","Data":"6c210cad6467518a12f96b3f19483d34c628d1343fa9b6484920a0611662dfac"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.485360 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-8qn4t" event={"ID":"f2b774f1-2516-4d43-9ee4-5c9039933dc5","Type":"ContainerStarted","Data":"8da94fd83b9f588aa42873909e98dde4b05d03e0e4c6411097e745a17213680b"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.486010 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-8qn4t" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.487778 5113 patch_prober.go:28] interesting pod/observability-operator-78c97476f4-8qn4t container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.52:8081/healthz\": dial tcp 10.217.0.52:8081: connect: connection refused" start-of-body= Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.487847 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-78c97476f4-8qn4t" podUID="f2b774f1-2516-4d43-9ee4-5c9039933dc5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/healthz\": dial tcp 10.217.0.52:8081: connect: connection refused" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.490071 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" event={"ID":"5ba6ebf4-af19-4ac0-a94b-ea0e9c3f6412","Type":"ContainerStarted","Data":"1cfe4599a4cf503da85546d9a5a8153c31caddf56b4911396cf72cd91f6acfae"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.493910 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" event={"ID":"fc347132-c28a-43ec-9fde-4fbfb793b79f","Type":"ContainerStarted","Data":"f62aae22ccfa90e7486e3145ac9ab8098b7f35622736a3595f5be7c27a060f97"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.513047 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-nxz85" podStartSLOduration=5.513266571 podStartE2EDuration="44.513011823s" podCreationTimestamp="2025-12-08 17:54:03 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.995517923 +0000 UTC m=+812.711311039" lastFinishedPulling="2025-12-08 17:54:45.995263165 +0000 UTC m=+851.711056291" observedRunningTime="2025-12-08 17:54:47.510363326 +0000 UTC m=+853.226156452" watchObservedRunningTime="2025-12-08 17:54:47.513011823 +0000 UTC m=+853.228804939" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.514448 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" event={"ID":"1541c5bf-56b3-40b8-953f-94a953365060","Type":"ContainerStarted","Data":"dae155f3169383f859ae079be6b1a86c903ed42766324bd3366e6acca49671f8"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.516686 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk" event={"ID":"fc492807-55ac-46bb-9974-a035552387e8","Type":"ContainerStarted","Data":"3a9564d7a67b55fcb72a6dce7b32b88a3bae444396cc195464c31c70316f2f6b"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.520618 5113 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6" event={"ID":"c60e3c95-ee39-4a90-9c03-b96f05f4ec97","Type":"ContainerStarted","Data":"41ff38c1cc8fcc04a83bbceb807745bffcc229011215da3053cc9e7796e06924"} Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.520684 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.564795 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-7754dcd88f-fvq7h" podStartSLOduration=6.088779857 podStartE2EDuration="45.564776453s" podCreationTimestamp="2025-12-08 17:54:02 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.568577467 +0000 UTC m=+812.284370583" lastFinishedPulling="2025-12-08 17:54:46.044574053 +0000 UTC m=+851.760367179" observedRunningTime="2025-12-08 17:54:47.550339305 +0000 UTC m=+853.266132421" watchObservedRunningTime="2025-12-08 17:54:47.564776453 +0000 UTC m=+853.280569569" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.624561 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-8qn4t" podStartSLOduration=4.5212719660000005 podStartE2EDuration="43.624537667s" podCreationTimestamp="2025-12-08 17:54:04 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.865283333 +0000 UTC m=+812.581076449" lastFinishedPulling="2025-12-08 17:54:45.968549034 +0000 UTC m=+851.684342150" observedRunningTime="2025-12-08 17:54:47.600076813 +0000 UTC m=+853.315869939" watchObservedRunningTime="2025-12-08 17:54:47.624537667 +0000 UTC m=+853.340330783" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.626171 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-bznc9" podStartSLOduration=9.319216734 podStartE2EDuration="48.626162089s" podCreationTimestamp="2025-12-08 17:53:59 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.679532866 +0000 UTC m=+812.395325982" lastFinishedPulling="2025-12-08 17:54:45.986478231 +0000 UTC m=+851.702271337" observedRunningTime="2025-12-08 17:54:47.621570021 +0000 UTC m=+853.337363137" watchObservedRunningTime="2025-12-08 17:54:47.626162089 +0000 UTC m=+853.341955205" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.664569 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7fd44b68c6-9zvh7" podStartSLOduration=5.323644687 podStartE2EDuration="44.664531307s" podCreationTimestamp="2025-12-08 17:54:03 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.605619742 +0000 UTC m=+812.321412858" lastFinishedPulling="2025-12-08 17:54:45.946506362 +0000 UTC m=+851.662299478" observedRunningTime="2025-12-08 17:54:47.650472878 +0000 UTC m=+853.366265994" watchObservedRunningTime="2025-12-08 17:54:47.664531307 +0000 UTC m=+853.380324423" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.718159 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rjtlb" podStartSLOduration=21.401799411 podStartE2EDuration="31.718107913s" podCreationTimestamp="2025-12-08 17:54:16 +0000 UTC" firstStartedPulling="2025-12-08 17:54:35.680027761 +0000 UTC m=+841.395820877" lastFinishedPulling="2025-12-08 17:54:45.996336263 +0000 UTC m=+851.712129379" 
observedRunningTime="2025-12-08 17:54:47.677631441 +0000 UTC m=+853.393424557" watchObservedRunningTime="2025-12-08 17:54:47.718107913 +0000 UTC m=+853.433901039" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.728746 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-5lkbk" podStartSLOduration=5.612850261 podStartE2EDuration="44.728698913s" podCreationTimestamp="2025-12-08 17:54:03 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.852672661 +0000 UTC m=+812.568465777" lastFinishedPulling="2025-12-08 17:54:45.968521313 +0000 UTC m=+851.684314429" observedRunningTime="2025-12-08 17:54:47.706685522 +0000 UTC m=+853.422478648" watchObservedRunningTime="2025-12-08 17:54:47.728698913 +0000 UTC m=+853.444492029" Dec 08 17:54:47 crc kubenswrapper[5113]: I1208 17:54:47.756919 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6" podStartSLOduration=5.540071402 podStartE2EDuration="43.756896902s" podCreationTimestamp="2025-12-08 17:54:04 +0000 UTC" firstStartedPulling="2025-12-08 17:54:06.911170252 +0000 UTC m=+812.626963368" lastFinishedPulling="2025-12-08 17:54:45.127995752 +0000 UTC m=+850.843788868" observedRunningTime="2025-12-08 17:54:47.753392803 +0000 UTC m=+853.469185929" watchObservedRunningTime="2025-12-08 17:54:47.756896902 +0000 UTC m=+853.472690018" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.165837 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166666 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166695 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166721 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="extract-content" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166729 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="extract-content" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166745 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="extract-utilities" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166754 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="extract-utilities" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166775 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" containerName="registry" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166782 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" containerName="registry" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166920 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="401e85c2-a1e6-4642-80cf-23e461cef995" containerName="registry" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.166941 5113 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="34a07c9d-6a37-4bbb-82ab-d6eaaad0d1d6" containerName="registry-server" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.173114 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.176178 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.176389 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.176385 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-6vs9x\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.176524 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.176881 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.177076 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.177101 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.177319 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.178156 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.193860 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224014 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224113 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224166 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224224 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/620ef7f7-9ff0-46da-8684-a4f866a0adc2-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224248 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224269 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224300 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224357 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224563 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224615 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224643 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224715 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224777 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224902 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.224993 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326324 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326409 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326628 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326778 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326822 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326880 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.326973 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327069 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/620ef7f7-9ff0-46da-8684-a4f866a0adc2-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327223 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327245 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327262 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328247 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327428 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327916 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328277 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.327604 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328370 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328404 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328437 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328449 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328608 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: 
\"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.328951 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.329093 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.339870 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/620ef7f7-9ff0-46da-8684-a4f866a0adc2-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.340780 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.341234 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.343680 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.346245 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.347175 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.352672 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/620ef7f7-9ff0-46da-8684-a4f866a0adc2-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"620ef7f7-9ff0-46da-8684-a4f866a0adc2\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.497209 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:54:48 crc kubenswrapper[5113]: I1208 17:54:48.536615 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-8qn4t" Dec 08 17:54:49 crc kubenswrapper[5113]: I1208 17:54:49.038410 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:54:49 crc kubenswrapper[5113]: I1208 17:54:49.536076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"620ef7f7-9ff0-46da-8684-a4f866a0adc2","Type":"ContainerStarted","Data":"5d06254f3525ee310bfd302322065d1f733b0e57fc93ec8de7182701230297cb"} Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.820937 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5"] Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.831477 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.836606 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.836971 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-r6k86\"" Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.838535 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.841102 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5"] Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.926207 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb6nf\" (UniqueName: \"kubernetes.io/projected/2d2e3a5b-6197-4df2-a920-dae55a95bb41-kube-api-access-zb6nf\") pod \"cert-manager-webhook-7894b5b9b4-8h9p5\" (UID: \"2d2e3a5b-6197-4df2-a920-dae55a95bb41\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:51 crc kubenswrapper[5113]: I1208 17:54:51.926270 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d2e3a5b-6197-4df2-a920-dae55a95bb41-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-8h9p5\" (UID: \"2d2e3a5b-6197-4df2-a920-dae55a95bb41\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:52 crc kubenswrapper[5113]: I1208 17:54:52.027318 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zb6nf\" (UniqueName: \"kubernetes.io/projected/2d2e3a5b-6197-4df2-a920-dae55a95bb41-kube-api-access-zb6nf\") pod \"cert-manager-webhook-7894b5b9b4-8h9p5\" (UID: 
\"2d2e3a5b-6197-4df2-a920-dae55a95bb41\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:52 crc kubenswrapper[5113]: I1208 17:54:52.027383 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d2e3a5b-6197-4df2-a920-dae55a95bb41-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-8h9p5\" (UID: \"2d2e3a5b-6197-4df2-a920-dae55a95bb41\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:52 crc kubenswrapper[5113]: I1208 17:54:52.067555 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb6nf\" (UniqueName: \"kubernetes.io/projected/2d2e3a5b-6197-4df2-a920-dae55a95bb41-kube-api-access-zb6nf\") pod \"cert-manager-webhook-7894b5b9b4-8h9p5\" (UID: \"2d2e3a5b-6197-4df2-a920-dae55a95bb41\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:52 crc kubenswrapper[5113]: I1208 17:54:52.075843 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2d2e3a5b-6197-4df2-a920-dae55a95bb41-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-8h9p5\" (UID: \"2d2e3a5b-6197-4df2-a920-dae55a95bb41\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:52 crc kubenswrapper[5113]: I1208 17:54:52.152669 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.405512 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s"] Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.811045 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.818535 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s"] Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.822712 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-npk8j\"" Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.871541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a4568c0-b8a6-49fe-bf65-bd229e2c3870-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-4mf2s\" (UID: \"4a4568c0-b8a6-49fe-bf65-bd229e2c3870\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.871720 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzsg\" (UniqueName: \"kubernetes.io/projected/4a4568c0-b8a6-49fe-bf65-bd229e2c3870-kube-api-access-mrzsg\") pod \"cert-manager-cainjector-7dbf76d5c8-4mf2s\" (UID: \"4a4568c0-b8a6-49fe-bf65-bd229e2c3870\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.972891 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a4568c0-b8a6-49fe-bf65-bd229e2c3870-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-4mf2s\" (UID: \"4a4568c0-b8a6-49fe-bf65-bd229e2c3870\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:54 crc kubenswrapper[5113]: I1208 17:54:54.973393 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzsg\" (UniqueName: \"kubernetes.io/projected/4a4568c0-b8a6-49fe-bf65-bd229e2c3870-kube-api-access-mrzsg\") pod \"cert-manager-cainjector-7dbf76d5c8-4mf2s\" (UID: \"4a4568c0-b8a6-49fe-bf65-bd229e2c3870\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:55 crc kubenswrapper[5113]: I1208 17:54:55.000221 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a4568c0-b8a6-49fe-bf65-bd229e2c3870-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-4mf2s\" (UID: \"4a4568c0-b8a6-49fe-bf65-bd229e2c3870\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:55 crc kubenswrapper[5113]: I1208 17:54:55.003184 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzsg\" (UniqueName: \"kubernetes.io/projected/4a4568c0-b8a6-49fe-bf65-bd229e2c3870-kube-api-access-mrzsg\") pod \"cert-manager-cainjector-7dbf76d5c8-4mf2s\" (UID: \"4a4568c0-b8a6-49fe-bf65-bd229e2c3870\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:55 crc kubenswrapper[5113]: I1208 17:54:55.139268 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" Dec 08 17:54:56 crc kubenswrapper[5113]: I1208 17:54:56.914405 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5"] Dec 08 17:54:56 crc kubenswrapper[5113]: W1208 17:54:56.942141 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d2e3a5b_6197_4df2_a920_dae55a95bb41.slice/crio-8ade51c99c150e7b40f1009535a04f0e02f236a903ec9f9ad652d92eece1b2a2 WatchSource:0}: Error finding container 8ade51c99c150e7b40f1009535a04f0e02f236a903ec9f9ad652d92eece1b2a2: Status 404 returned error can't find the container with id 8ade51c99c150e7b40f1009535a04f0e02f236a903ec9f9ad652d92eece1b2a2 Dec 08 17:54:57 crc kubenswrapper[5113]: I1208 17:54:57.185473 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s"] Dec 08 17:54:57 crc kubenswrapper[5113]: W1208 17:54:57.196124 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a4568c0_b8a6_49fe_bf65_bd229e2c3870.slice/crio-755f6e0a36a12aff962911d6fce7d9eedd30821f44e3db36c9fbda963d4c5071 WatchSource:0}: Error finding container 755f6e0a36a12aff962911d6fce7d9eedd30821f44e3db36c9fbda963d4c5071: Status 404 returned error can't find the container with id 755f6e0a36a12aff962911d6fce7d9eedd30821f44e3db36c9fbda963d4c5071 Dec 08 17:54:57 crc kubenswrapper[5113]: I1208 17:54:57.713699 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" event={"ID":"2d2e3a5b-6197-4df2-a920-dae55a95bb41","Type":"ContainerStarted","Data":"8ade51c99c150e7b40f1009535a04f0e02f236a903ec9f9ad652d92eece1b2a2"} Dec 08 17:54:57 crc kubenswrapper[5113]: I1208 17:54:57.717889 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" event={"ID":"4a4568c0-b8a6-49fe-bf65-bd229e2c3870","Type":"ContainerStarted","Data":"755f6e0a36a12aff962911d6fce7d9eedd30821f44e3db36c9fbda963d4c5071"} Dec 08 17:54:58 crc kubenswrapper[5113]: I1208 17:54:58.534610 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-2wsm6" Dec 08 17:55:08 crc kubenswrapper[5113]: I1208 17:55:08.823742 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"620ef7f7-9ff0-46da-8684-a4f866a0adc2","Type":"ContainerStarted","Data":"e6dfafab5c0a18dda61c4e225bb0884db5abb84f6c11296ec45b9dec846b0356"} Dec 08 17:55:09 crc kubenswrapper[5113]: I1208 17:55:09.075507 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:09 crc kubenswrapper[5113]: I1208 17:55:09.097473 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.097655 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-4zg8c"] Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.467779 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-4zg8c"] Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.468059 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.471068 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-qvh5q\"" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.511231 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/130cd725-5e3f-42b0-8c0a-c5af8e67f164-bound-sa-token\") pod \"cert-manager-858d87f86b-4zg8c\" (UID: \"130cd725-5e3f-42b0-8c0a-c5af8e67f164\") " pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.511279 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7862\" (UniqueName: \"kubernetes.io/projected/130cd725-5e3f-42b0-8c0a-c5af8e67f164-kube-api-access-m7862\") pod \"cert-manager-858d87f86b-4zg8c\" (UID: \"130cd725-5e3f-42b0-8c0a-c5af8e67f164\") " pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.612728 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/130cd725-5e3f-42b0-8c0a-c5af8e67f164-bound-sa-token\") pod \"cert-manager-858d87f86b-4zg8c\" (UID: \"130cd725-5e3f-42b0-8c0a-c5af8e67f164\") " pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.612788 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7862\" (UniqueName: \"kubernetes.io/projected/130cd725-5e3f-42b0-8c0a-c5af8e67f164-kube-api-access-m7862\") pod \"cert-manager-858d87f86b-4zg8c\" (UID: \"130cd725-5e3f-42b0-8c0a-c5af8e67f164\") " pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.634418 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/130cd725-5e3f-42b0-8c0a-c5af8e67f164-bound-sa-token\") pod \"cert-manager-858d87f86b-4zg8c\" (UID: \"130cd725-5e3f-42b0-8c0a-c5af8e67f164\") " pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.636138 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7862\" (UniqueName: \"kubernetes.io/projected/130cd725-5e3f-42b0-8c0a-c5af8e67f164-kube-api-access-m7862\") pod \"cert-manager-858d87f86b-4zg8c\" (UID: \"130cd725-5e3f-42b0-8c0a-c5af8e67f164\") " pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:10 crc kubenswrapper[5113]: I1208 17:55:10.789804 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-4zg8c" Dec 08 17:55:11 crc kubenswrapper[5113]: I1208 17:55:11.849727 5113 generic.go:358] "Generic (PLEG): container finished" podID="620ef7f7-9ff0-46da-8684-a4f866a0adc2" containerID="e6dfafab5c0a18dda61c4e225bb0884db5abb84f6c11296ec45b9dec846b0356" exitCode=0 Dec 08 17:55:11 crc kubenswrapper[5113]: I1208 17:55:11.849778 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"620ef7f7-9ff0-46da-8684-a4f866a0adc2","Type":"ContainerDied","Data":"e6dfafab5c0a18dda61c4e225bb0884db5abb84f6c11296ec45b9dec846b0356"} Dec 08 17:55:19 crc kubenswrapper[5113]: I1208 17:55:19.648541 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-4zg8c"] Dec 08 17:55:19 crc kubenswrapper[5113]: W1208 17:55:19.653895 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod130cd725_5e3f_42b0_8c0a_c5af8e67f164.slice/crio-3e5453da298bf9556bcb92dfcb74da7079ed1a3d250b739a7acf46b5a309b602 WatchSource:0}: Error finding container 3e5453da298bf9556bcb92dfcb74da7079ed1a3d250b739a7acf46b5a309b602: Status 404 returned error can't find the container with id 3e5453da298bf9556bcb92dfcb74da7079ed1a3d250b739a7acf46b5a309b602 Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.521481 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-4zg8c" event={"ID":"130cd725-5e3f-42b0-8c0a-c5af8e67f164","Type":"ContainerStarted","Data":"a95ff15eb03621c00c4734335dc64743af72706f9225ca140a6ebd05ec76276f"} Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.522175 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-4zg8c" event={"ID":"130cd725-5e3f-42b0-8c0a-c5af8e67f164","Type":"ContainerStarted","Data":"3e5453da298bf9556bcb92dfcb74da7079ed1a3d250b739a7acf46b5a309b602"} Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.523724 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" event={"ID":"2d2e3a5b-6197-4df2-a920-dae55a95bb41","Type":"ContainerStarted","Data":"0d2fbcf406230110e7f7c567a7fddc039268aa5ab55f41fd350cb5ed015a7768"} Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.523868 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.525719 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" event={"ID":"4a4568c0-b8a6-49fe-bf65-bd229e2c3870","Type":"ContainerStarted","Data":"cf4fa9953de415c55f74aea8c11c61f983ce9e2eae4f64cbf3030719536acb56"} Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.529749 5113 generic.go:358] "Generic (PLEG): container finished" podID="620ef7f7-9ff0-46da-8684-a4f866a0adc2" containerID="2913f2eab0f58de90f446ac351b94ada1737216ea05483cc4ad64902696ab657" exitCode=0 Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.529839 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"620ef7f7-9ff0-46da-8684-a4f866a0adc2","Type":"ContainerDied","Data":"2913f2eab0f58de90f446ac351b94ada1737216ea05483cc4ad64902696ab657"} Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.545259 5113 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-858d87f86b-4zg8c" podStartSLOduration=10.545237237 podStartE2EDuration="10.545237237s" podCreationTimestamp="2025-12-08 17:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 17:55:20.542880887 +0000 UTC m=+886.258674003" watchObservedRunningTime="2025-12-08 17:55:20.545237237 +0000 UTC m=+886.261030363" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.581517 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" podStartSLOduration=6.990051332 podStartE2EDuration="29.581482661s" podCreationTimestamp="2025-12-08 17:54:51 +0000 UTC" firstStartedPulling="2025-12-08 17:54:56.946746301 +0000 UTC m=+862.662539417" lastFinishedPulling="2025-12-08 17:55:19.53817763 +0000 UTC m=+885.253970746" observedRunningTime="2025-12-08 17:55:20.574296348 +0000 UTC m=+886.290089464" watchObservedRunningTime="2025-12-08 17:55:20.581482661 +0000 UTC m=+886.297275777" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.643159 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4mf2s" podStartSLOduration=4.292753981 podStartE2EDuration="26.643133163s" podCreationTimestamp="2025-12-08 17:54:54 +0000 UTC" firstStartedPulling="2025-12-08 17:54:57.198570052 +0000 UTC m=+862.914363168" lastFinishedPulling="2025-12-08 17:55:19.548949234 +0000 UTC m=+885.264742350" observedRunningTime="2025-12-08 17:55:20.63281684 +0000 UTC m=+886.348609956" watchObservedRunningTime="2025-12-08 17:55:20.643133163 +0000 UTC m=+886.358926279" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.923402 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.937086 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.942169 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.942273 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.942298 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.942505 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:20 crc kubenswrapper[5113]: I1208 17:55:20.942713 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-q2lg2\"" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046362 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046450 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dhqs\" (UniqueName: \"kubernetes.io/projected/99820e98-44ea-4ad6-ae14-c15472571a3b-kube-api-access-4dhqs\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046506 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046589 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046677 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046715 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046742 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046811 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046840 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046874 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.046909 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.147895 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.147945 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-buildworkdir\") pod 
\"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.147966 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.147990 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148009 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148029 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148066 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148095 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148117 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4dhqs\" (UniqueName: \"kubernetes.io/projected/99820e98-44ea-4ad6-ae14-c15472571a3b-kube-api-access-4dhqs\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148137 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: 
I1208 17:55:21.148151 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148227 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148759 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.148841 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.149029 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.149794 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.149891 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.150067 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.150106 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: 
\"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.150173 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.150403 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.156816 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.164646 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.175184 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dhqs\" (UniqueName: \"kubernetes.io/projected/99820e98-44ea-4ad6-ae14-c15472571a3b-kube-api-access-4dhqs\") pod \"service-telemetry-operator-1-build\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.257192 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.528227 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.539267 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"620ef7f7-9ff0-46da-8684-a4f866a0adc2","Type":"ContainerStarted","Data":"b411d953ddfae35d5f4e2b3982233ac0a4efda512a94ef47c3cd2e0cf36ab0da"} Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.541591 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:21 crc kubenswrapper[5113]: I1208 17:55:21.581954 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=14.590409612 podStartE2EDuration="33.5819286s" podCreationTimestamp="2025-12-08 17:54:48 +0000 UTC" firstStartedPulling="2025-12-08 17:54:49.050021854 +0000 UTC m=+854.765814970" lastFinishedPulling="2025-12-08 17:55:08.041540842 +0000 UTC m=+873.757333958" observedRunningTime="2025-12-08 17:55:21.574239044 +0000 UTC m=+887.290032160" watchObservedRunningTime="2025-12-08 17:55:21.5819286 +0000 UTC m=+887.297721716" Dec 08 17:55:22 crc kubenswrapper[5113]: I1208 17:55:22.549085 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"99820e98-44ea-4ad6-ae14-c15472571a3b","Type":"ContainerStarted","Data":"7811c13f669cafa02c1d00f0ac4905ffec26ff630fd3fc76c6299cdffe45ae5a"} Dec 08 17:55:26 crc kubenswrapper[5113]: I1208 17:55:26.546695 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-8h9p5" Dec 08 17:55:31 crc kubenswrapper[5113]: I1208 17:55:31.179565 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:32 crc kubenswrapper[5113]: I1208 17:55:32.654138 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="620ef7f7-9ff0-46da-8684-a4f866a0adc2" containerName="elasticsearch" probeResult="failure" output=< Dec 08 17:55:32 crc kubenswrapper[5113]: {"timestamp": "2025-12-08T17:55:32+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 17:55:32 crc kubenswrapper[5113]: > Dec 08 17:55:33 crc kubenswrapper[5113]: I1208 17:55:33.241150 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.334372 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.338833 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.342008 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.345427 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.353359 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.386610 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.387150 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.387355 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.387538 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.387645 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.387788 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-kube-api-access-ftw9f\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.387916 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.388226 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.388360 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.388489 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.388636 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.388790 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.427567 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g9mkp_c4621882-3d98-4910-9263-5959d2302427/kube-multus/0.log" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.427567 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g9mkp_c4621882-3d98-4910-9263-5959d2302427/kube-multus/0.log" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.429627 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.429647 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.490390 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.490799 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.490906 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.490996 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491100 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491148 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491186 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-kube-api-access-ftw9f\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491329 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491441 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491590 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491673 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.491995 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.492243 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.492436 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.508867 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.509777 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.510124 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.510277 5113 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.510749 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.511123 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.511495 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.511691 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.511727 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.512010 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-kube-api-access-ftw9f\") pod \"service-telemetry-operator-2-build\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") " pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:35 crc kubenswrapper[5113]: I1208 17:55:35.662104 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:36 crc kubenswrapper[5113]: I1208 17:55:36.024353 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:36 crc kubenswrapper[5113]: I1208 17:55:36.785358 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"ccca3f64-429a-4e5d-89bd-e4ab1128eb72","Type":"ContainerStarted","Data":"95cbf8808a1ae7cc97d432db1f67c509029e7e490564b5e4d305c6fbb7eb9404"} Dec 08 17:55:37 crc kubenswrapper[5113]: I1208 17:55:37.659244 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="620ef7f7-9ff0-46da-8684-a4f866a0adc2" containerName="elasticsearch" probeResult="failure" output=< Dec 08 17:55:37 crc kubenswrapper[5113]: {"timestamp": "2025-12-08T17:55:37+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 17:55:37 crc kubenswrapper[5113]: > Dec 08 17:55:42 crc kubenswrapper[5113]: I1208 17:55:42.660117 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="620ef7f7-9ff0-46da-8684-a4f866a0adc2" containerName="elasticsearch" probeResult="failure" output=< Dec 08 17:55:42 crc kubenswrapper[5113]: {"timestamp": "2025-12-08T17:55:42+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 08 17:55:42 crc kubenswrapper[5113]: > Dec 08 17:55:43 crc kubenswrapper[5113]: I1208 17:55:43.966836 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"ccca3f64-429a-4e5d-89bd-e4ab1128eb72","Type":"ContainerStarted","Data":"aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1"} Dec 08 17:55:43 crc kubenswrapper[5113]: I1208 17:55:43.970133 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"99820e98-44ea-4ad6-ae14-c15472571a3b","Type":"ContainerStarted","Data":"76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541"} Dec 08 17:55:43 crc kubenswrapper[5113]: I1208 17:55:43.970399 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="99820e98-44ea-4ad6-ae14-c15472571a3b" containerName="manage-dockerfile" containerID="cri-o://76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541" gracePeriod=30 Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.055461 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54348: no serving certificate available for the kubelet" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.541473 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_99820e98-44ea-4ad6-ae14-c15472571a3b/manage-dockerfile/0.log" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.541946 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.586953 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). 
InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.587015 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-node-pullsecrets\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.587199 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-run\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.587618 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.587678 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-ca-bundles\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.588731 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.588810 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-build-blob-cache\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589391 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589504 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-buildcachedir\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589569 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589597 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-buildworkdir\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589623 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-pull\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589648 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-push\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589671 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-root\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589704 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-system-configs\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589762 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-proxy-ca-bundles\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.589797 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dhqs\" (UniqueName: \"kubernetes.io/projected/99820e98-44ea-4ad6-ae14-c15472571a3b-kube-api-access-4dhqs\") pod \"99820e98-44ea-4ad6-ae14-c15472571a3b\" (UID: \"99820e98-44ea-4ad6-ae14-c15472571a3b\") " Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590266 5113 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590549 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590570 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590579 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590587 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590597 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/99820e98-44ea-4ad6-ae14-c15472571a3b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.590606 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.596477 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.596553 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.596963 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.604202 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-pull" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-pull") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "builder-dockercfg-q2lg2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.604925 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-push" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-push") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "builder-dockercfg-q2lg2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.618162 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99820e98-44ea-4ad6-ae14-c15472571a3b-kube-api-access-4dhqs" (OuterVolumeSpecName: "kube-api-access-4dhqs") pod "99820e98-44ea-4ad6-ae14-c15472571a3b" (UID: "99820e98-44ea-4ad6-ae14-c15472571a3b"). InnerVolumeSpecName "kube-api-access-4dhqs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.693139 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/99820e98-44ea-4ad6-ae14-c15472571a3b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.693194 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.693217 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/99820e98-44ea-4ad6-ae14-c15472571a3b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.693230 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4dhqs\" (UniqueName: \"kubernetes.io/projected/99820e98-44ea-4ad6-ae14-c15472571a3b-kube-api-access-4dhqs\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.693246 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.693265 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/99820e98-44ea-4ad6-ae14-c15472571a3b-builder-dockercfg-q2lg2-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.980908 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_99820e98-44ea-4ad6-ae14-c15472571a3b/manage-dockerfile/0.log" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.981449 5113 generic.go:358] "Generic (PLEG): container finished" 
podID="99820e98-44ea-4ad6-ae14-c15472571a3b" containerID="76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541" exitCode=1 Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.981620 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.981666 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"99820e98-44ea-4ad6-ae14-c15472571a3b","Type":"ContainerDied","Data":"76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541"} Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.982083 5113 scope.go:117] "RemoveContainer" containerID="76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541" Dec 08 17:55:44 crc kubenswrapper[5113]: I1208 17:55:44.982847 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"99820e98-44ea-4ad6-ae14-c15472571a3b","Type":"ContainerDied","Data":"7811c13f669cafa02c1d00f0ac4905ffec26ff630fd3fc76c6299cdffe45ae5a"} Dec 08 17:55:45 crc kubenswrapper[5113]: I1208 17:55:45.009628 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:45 crc kubenswrapper[5113]: I1208 17:55:45.018363 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Dec 08 17:55:45 crc kubenswrapper[5113]: I1208 17:55:45.100092 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:45 crc kubenswrapper[5113]: I1208 17:55:45.205536 5113 scope.go:117] "RemoveContainer" containerID="76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541" Dec 08 17:55:45 crc kubenswrapper[5113]: E1208 17:55:45.206395 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541\": container with ID starting with 76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541 not found: ID does not exist" containerID="76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541" Dec 08 17:55:45 crc kubenswrapper[5113]: I1208 17:55:45.206472 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541"} err="failed to get container status \"76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541\": rpc error: code = NotFound desc = could not find container \"76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541\": container with ID starting with 76fa3696a0bfdf45e34799ab657c435525463676696c61a9a76285104a687541 not found: ID does not exist" Dec 08 17:55:45 crc kubenswrapper[5113]: I1208 17:55:45.993966 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="ccca3f64-429a-4e5d-89bd-e4ab1128eb72" containerName="git-clone" containerID="cri-o://aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1" gracePeriod=30 Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.451692 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_ccca3f64-429a-4e5d-89bd-e4ab1128eb72/git-clone/0.log" Dec 08 
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.452204 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496155 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-node-pullsecrets\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496310 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-blob-cache\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496319 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496440 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-run\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-root\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496565 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-pull\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496595 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-push\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496877 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildworkdir\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496947 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-kube-api-access-ftw9f\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497008 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-proxy-ca-bundles\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.496985 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497048 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497098 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildcachedir\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497155 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-system-configs\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497217 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-ca-bundles\") pod \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\" (UID: \"ccca3f64-429a-4e5d-89bd-e4ab1128eb72\") "
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497361 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497553 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497568 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.497750 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498139 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498262 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498712 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-blob-cache\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498736 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-run\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498769 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-container-storage-root\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498781 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildworkdir\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498793 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498804 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-buildcachedir\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498814 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-system-configs\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498823 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.498833 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.507666 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-pull" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-pull") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "builder-dockercfg-q2lg2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.515345 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-push" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-push") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "builder-dockercfg-q2lg2-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.522830 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-kube-api-access-ftw9f" (OuterVolumeSpecName: "kube-api-access-ftw9f") pod "ccca3f64-429a-4e5d-89bd-e4ab1128eb72" (UID: "ccca3f64-429a-4e5d-89bd-e4ab1128eb72"). InnerVolumeSpecName "kube-api-access-ftw9f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.600210 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-pull\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.600576 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-builder-dockercfg-q2lg2-push\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.600646 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftw9f\" (UniqueName: \"kubernetes.io/projected/ccca3f64-429a-4e5d-89bd-e4ab1128eb72-kube-api-access-ftw9f\") on node \"crc\" DevicePath \"\""
Dec 08 17:55:46 crc kubenswrapper[5113]: I1208 17:55:46.689803 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99820e98-44ea-4ad6-ae14-c15472571a3b" path="/var/lib/kubelet/pods/99820e98-44ea-4ad6-ae14-c15472571a3b/volumes"
Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.004496 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_ccca3f64-429a-4e5d-89bd-e4ab1128eb72/git-clone/0.log"
Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.004568 5113 generic.go:358] "Generic (PLEG): container finished" podID="ccca3f64-429a-4e5d-89bd-e4ab1128eb72" containerID="aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1" exitCode=1
Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.004735 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"ccca3f64-429a-4e5d-89bd-e4ab1128eb72","Type":"ContainerDied","Data":"aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1"}
Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.004793 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"ccca3f64-429a-4e5d-89bd-e4ab1128eb72","Type":"ContainerDied","Data":"95cbf8808a1ae7cc97d432db1f67c509029e7e490564b5e4d305c6fbb7eb9404"}
Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.004816 5113 scope.go:117] "RemoveContainer" containerID="aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1"
Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.005029 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.034844 5113 scope.go:117] "RemoveContainer" containerID="aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1" Dec 08 17:55:47 crc kubenswrapper[5113]: E1208 17:55:47.035659 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1\": container with ID starting with aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1 not found: ID does not exist" containerID="aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1" Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.035724 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1"} err="failed to get container status \"aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1\": rpc error: code = NotFound desc = could not find container \"aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1\": container with ID starting with aa5a7b7a0f85ab80755cb57c2e76becc671fa32e0d56ab9ddca13f741ae239c1 not found: ID does not exist" Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.042753 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:47 crc kubenswrapper[5113]: I1208 17:55:47.049528 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Dec 08 17:55:48 crc kubenswrapper[5113]: I1208 17:55:48.197860 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 17:55:48 crc kubenswrapper[5113]: I1208 17:55:48.691184 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccca3f64-429a-4e5d-89bd-e4ab1128eb72" path="/var/lib/kubelet/pods/ccca3f64-429a-4e5d-89bd-e4ab1128eb72/volumes" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.657975 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.660124 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99820e98-44ea-4ad6-ae14-c15472571a3b" containerName="manage-dockerfile" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.660231 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="99820e98-44ea-4ad6-ae14-c15472571a3b" containerName="manage-dockerfile" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.660304 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ccca3f64-429a-4e5d-89bd-e4ab1128eb72" containerName="git-clone" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.660313 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccca3f64-429a-4e5d-89bd-e4ab1128eb72" containerName="git-clone" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.660518 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ccca3f64-429a-4e5d-89bd-e4ab1128eb72" containerName="git-clone" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.660545 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="99820e98-44ea-4ad6-ae14-c15472571a3b" containerName="manage-dockerfile" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 
17:55:56.669662 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.674070 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-ca\"" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.675419 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-global-ca\"" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.675858 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-q2lg2\"" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.679190 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-3-sys-config\"" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.697492 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.768803 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.768881 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769019 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769136 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769179 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769299 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769345 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769398 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769462 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/5eed39ad-166d-4c94-98f5-64b2d572112d-kube-api-access-n862k\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769502 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769557 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.769576 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870578 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870644 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870696 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870776 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/5eed39ad-166d-4c94-98f5-64b2d572112d-kube-api-access-n862k\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870855 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870880 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870913 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870946 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.870976 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871016 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871078 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871337 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871432 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871507 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871618 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871618 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871753 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.871918 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.872733 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.872923 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.879321 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.879351 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:56 crc kubenswrapper[5113]: I1208 17:55:56.895193 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/5eed39ad-166d-4c94-98f5-64b2d572112d-kube-api-access-n862k\") pod \"service-telemetry-operator-3-build\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:57 crc kubenswrapper[5113]: I1208 17:55:57.179499 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:55:58 crc kubenswrapper[5113]: I1208 17:55:58.112127 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:55:59 crc kubenswrapper[5113]: I1208 17:55:59.099076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"5eed39ad-166d-4c94-98f5-64b2d572112d","Type":"ContainerStarted","Data":"349d70b2111c1822253d24cd18d905a5db0278abd51d17c7bdbd9dfd47a441df"} Dec 08 17:56:00 crc kubenswrapper[5113]: I1208 17:56:00.108658 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"5eed39ad-166d-4c94-98f5-64b2d572112d","Type":"ContainerStarted","Data":"ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402"} Dec 08 17:56:00 crc kubenswrapper[5113]: I1208 17:56:00.172493 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54980: no serving certificate available for the kubelet" Dec 08 17:56:01 crc kubenswrapper[5113]: I1208 17:56:01.213445 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.122033 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="5eed39ad-166d-4c94-98f5-64b2d572112d" containerName="git-clone" containerID="cri-o://ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402" gracePeriod=30 Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.639704 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_5eed39ad-166d-4c94-98f5-64b2d572112d/git-clone/0.log" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.640286 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681107 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-buildworkdir\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681151 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/5eed39ad-166d-4c94-98f5-64b2d572112d-kube-api-access-n862k\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681200 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-pull\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681219 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-push\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681305 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-root\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681327 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-buildcachedir\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681346 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-run\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681363 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-node-pullsecrets\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681428 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-system-configs\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681463 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-proxy-ca-bundles\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681504 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-build-blob-cache\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.681527 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-ca-bundles\") pod \"5eed39ad-166d-4c94-98f5-64b2d572112d\" (UID: \"5eed39ad-166d-4c94-98f5-64b2d572112d\") " Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.682269 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.682301 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.682470 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.682718 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.682765 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.683013 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.683062 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.683364 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.683577 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.742911 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-pull" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-pull") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "builder-dockercfg-q2lg2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.743091 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-push" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-push") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "builder-dockercfg-q2lg2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.743685 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eed39ad-166d-4c94-98f5-64b2d572112d-kube-api-access-n862k" (OuterVolumeSpecName: "kube-api-access-n862k") pod "5eed39ad-166d-4c94-98f5-64b2d572112d" (UID: "5eed39ad-166d-4c94-98f5-64b2d572112d"). InnerVolumeSpecName "kube-api-access-n862k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.783910 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.783959 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/5eed39ad-166d-4c94-98f5-64b2d572112d-builder-dockercfg-q2lg2-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.783973 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.783986 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.783997 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784009 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5eed39ad-166d-4c94-98f5-64b2d572112d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784022 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784056 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784067 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784078 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5eed39ad-166d-4c94-98f5-64b2d572112d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784088 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5eed39ad-166d-4c94-98f5-64b2d572112d-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:02 crc kubenswrapper[5113]: I1208 17:56:02.784102 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n862k\" (UniqueName: \"kubernetes.io/projected/5eed39ad-166d-4c94-98f5-64b2d572112d-kube-api-access-n862k\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.130456 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_5eed39ad-166d-4c94-98f5-64b2d572112d/git-clone/0.log" Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.130823 5113 generic.go:358] "Generic (PLEG): container finished" podID="5eed39ad-166d-4c94-98f5-64b2d572112d" containerID="ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402" exitCode=1 Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.130939 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"5eed39ad-166d-4c94-98f5-64b2d572112d","Type":"ContainerDied","Data":"ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402"} Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.131015 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"5eed39ad-166d-4c94-98f5-64b2d572112d","Type":"ContainerDied","Data":"349d70b2111c1822253d24cd18d905a5db0278abd51d17c7bdbd9dfd47a441df"} Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.131066 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.131113 5113 scope.go:117] "RemoveContainer" containerID="ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402" Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.151831 5113 scope.go:117] "RemoveContainer" containerID="ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402" Dec 08 17:56:03 crc kubenswrapper[5113]: E1208 17:56:03.152386 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402\": container with ID starting with ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402 not found: ID does not exist" containerID="ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402" Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.152458 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402"} err="failed to get container status \"ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402\": rpc error: code = NotFound desc = could not find container \"ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402\": container with ID starting with ec9ff5467ccfc62a73967a36f4ef5a11bb81df1e7ff880234fd4c0246cb78402 not found: ID does not exist" Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.172234 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:56:03 crc kubenswrapper[5113]: I1208 17:56:03.182076 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Dec 08 17:56:04 crc kubenswrapper[5113]: I1208 17:56:04.689753 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eed39ad-166d-4c94-98f5-64b2d572112d" path="/var/lib/kubelet/pods/5eed39ad-166d-4c94-98f5-64b2d572112d/volumes" Dec 08 17:56:12 crc kubenswrapper[5113]: I1208 17:56:12.671251 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:56:12 crc kubenswrapper[5113]: I1208 17:56:12.672773 5113 cpu_manager.go:401] "RemoveStaleState: 
Dec 08 17:56:12 crc kubenswrapper[5113]: I1208 17:56:12.672790 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eed39ad-166d-4c94-98f5-64b2d572112d" containerName="git-clone"
Dec 08 17:56:12 crc kubenswrapper[5113]: I1208 17:56:12.672928 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="5eed39ad-166d-4c94-98f5-64b2d572112d" containerName="git-clone"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.579901 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.580533 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.584568 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-ca\""
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.584568 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-q2lg2\""
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.584885 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-global-ca\""
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.585190 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-4-sys-config\""
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.656156 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.656736 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.656765 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.656865 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.656886 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.656950 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.657005 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.657070 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.657095 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.657150 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.657183 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.657203 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcqg\" (UniqueName: \"kubernetes.io/projected/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-kube-api-access-mmcqg\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758425 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758488 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758518 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758536 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758556 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758616 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758751 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758780 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcqg\" (UniqueName: \"kubernetes.io/projected/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-kube-api-access-mmcqg\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.758974 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759105 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759148 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759168 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759227 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759285 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759327 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759426 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759567 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759794 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759957 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.759976 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.760943 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.767495 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.767924 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.785837 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcqg\" (UniqueName: \"kubernetes.io/projected/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-kube-api-access-mmcqg\") pod \"service-telemetry-operator-4-build\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:13 crc kubenswrapper[5113]: I1208 17:56:13.910126 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Dec 08 17:56:14 crc kubenswrapper[5113]: I1208 17:56:14.427750 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Dec 08 17:56:15 crc kubenswrapper[5113]: I1208 17:56:15.229660 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b","Type":"ContainerStarted","Data":"44f04108ca7e9c94c755eac6fb1bfb49f05589be8d3753e2352123b555b0c1b3"}
Dec 08 17:56:16 crc kubenswrapper[5113]: I1208 17:56:16.239547 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b","Type":"ContainerStarted","Data":"3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24"}
Dec 08 17:56:16 crc kubenswrapper[5113]: I1208 17:56:16.299131 5113 ???:1] "http: TLS handshake error from 192.168.126.11:50952: no serving certificate available for the kubelet"
Dec 08 17:56:17 crc kubenswrapper[5113]: I1208 17:56:17.336737 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"]
Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.254876 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" containerName="git-clone" containerID="cri-o://3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24" gracePeriod=30
Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.704540 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_d19ac2b7-4ed0-4f81-a7c0-00724504bb8b/git-clone/0.log"
Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.704625 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build"
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.849568 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-node-pullsecrets\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850113 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-proxy-ca-bundles\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850165 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildcachedir\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850224 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-blob-cache\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850284 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-system-configs\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850320 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-run\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850345 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-push\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850374 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildworkdir\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850411 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-root\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850439 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: 
\"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-pull\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850555 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-ca-bundles\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850597 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmcqg\" (UniqueName: \"kubernetes.io/projected/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-kube-api-access-mmcqg\") pod \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\" (UID: \"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b\") " Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.849703 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.850843 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851196 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851427 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851579 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851606 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851620 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851541 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851702 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.851863 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.852168 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.852296 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.858687 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-kube-api-access-mmcqg" (OuterVolumeSpecName: "kube-api-access-mmcqg") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). 
InnerVolumeSpecName "kube-api-access-mmcqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.858939 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-push" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-push") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "builder-dockercfg-q2lg2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.860279 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-pull" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-pull") pod "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" (UID: "d19ac2b7-4ed0-4f81-a7c0-00724504bb8b"). InnerVolumeSpecName "builder-dockercfg-q2lg2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952644 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952680 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952691 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952701 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952715 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952727 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952739 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-builder-dockercfg-q2lg2-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952749 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:18 crc kubenswrapper[5113]: I1208 17:56:18.952759 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mmcqg\" (UniqueName: \"kubernetes.io/projected/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b-kube-api-access-mmcqg\") on 
node \"crc\" DevicePath \"\"" Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.265411 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_d19ac2b7-4ed0-4f81-a7c0-00724504bb8b/git-clone/0.log" Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.265479 5113 generic.go:358] "Generic (PLEG): container finished" podID="d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" containerID="3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24" exitCode=1 Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.265636 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.265638 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b","Type":"ContainerDied","Data":"3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24"} Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.265724 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"d19ac2b7-4ed0-4f81-a7c0-00724504bb8b","Type":"ContainerDied","Data":"44f04108ca7e9c94c755eac6fb1bfb49f05589be8d3753e2352123b555b0c1b3"} Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.265754 5113 scope.go:117] "RemoveContainer" containerID="3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24" Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.306758 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.313714 5113 scope.go:117] "RemoveContainer" containerID="3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24" Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.313981 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Dec 08 17:56:19 crc kubenswrapper[5113]: E1208 17:56:19.314618 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24\": container with ID starting with 3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24 not found: ID does not exist" containerID="3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24" Dec 08 17:56:19 crc kubenswrapper[5113]: I1208 17:56:19.314671 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24"} err="failed to get container status \"3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24\": rpc error: code = NotFound desc = could not find container \"3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24\": container with ID starting with 3027e78f4dd1fa525c04ef86915b814848dd3e99e0c41d06653dcf1e5178de24 not found: ID does not exist" Dec 08 17:56:20 crc kubenswrapper[5113]: I1208 17:56:20.690167 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" path="/var/lib/kubelet/pods/d19ac2b7-4ed0-4f81-a7c0-00724504bb8b/volumes" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.755032 5113 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.756412 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" containerName="git-clone" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.756429 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" containerName="git-clone" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.756536 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d19ac2b7-4ed0-4f81-a7c0-00724504bb8b" containerName="git-clone" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.770196 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.772159 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.773414 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-q2lg2\"" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.773423 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-ca\"" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.773476 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-global-ca\"" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.773424 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-5-sys-config\"" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.803184 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.803571 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.803726 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.803896 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804081 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804236 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804419 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804542 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804677 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804786 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.804915 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh5j9\" (UniqueName: \"kubernetes.io/projected/90e33a2f-513d-483e-be03-87c8e09edc00-kube-api-access-dh5j9\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.805300 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907680 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907744 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907793 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907827 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907849 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907876 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907913 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907933 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907958 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.907975 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.908006 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dh5j9\" (UniqueName: \"kubernetes.io/projected/90e33a2f-513d-483e-be03-87c8e09edc00-kube-api-access-dh5j9\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.908906 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.909132 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.909191 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.909233 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.909601 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.909692 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.910157 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.910309 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.910367 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.918360 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-push\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.918402 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:28 crc kubenswrapper[5113]: I1208 17:56:28.928775 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh5j9\" (UniqueName: \"kubernetes.io/projected/90e33a2f-513d-483e-be03-87c8e09edc00-kube-api-access-dh5j9\") pod \"service-telemetry-operator-5-build\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:29 crc kubenswrapper[5113]: I1208 17:56:29.092302 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:29 crc kubenswrapper[5113]: I1208 17:56:29.332519 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:56:29 crc kubenswrapper[5113]: I1208 17:56:29.344977 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 17:56:29 crc kubenswrapper[5113]: I1208 17:56:29.352843 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"90e33a2f-513d-483e-be03-87c8e09edc00","Type":"ContainerStarted","Data":"6ff7e12d19fdf90017629353d7094f537c9897cb9f198ad91b884131e923d9a0"} Dec 08 17:56:30 crc kubenswrapper[5113]: I1208 17:56:30.361901 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"90e33a2f-513d-483e-be03-87c8e09edc00","Type":"ContainerStarted","Data":"9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab"} Dec 08 17:56:30 crc kubenswrapper[5113]: I1208 17:56:30.423948 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36538: no serving certificate available for the kubelet" Dec 08 17:56:31 crc kubenswrapper[5113]: I1208 17:56:31.460191 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.378537 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="90e33a2f-513d-483e-be03-87c8e09edc00" containerName="git-clone" containerID="cri-o://9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab" gracePeriod=30 Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.813998 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_90e33a2f-513d-483e-be03-87c8e09edc00/git-clone/0.log" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.814966 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.877528 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh5j9\" (UniqueName: \"kubernetes.io/projected/90e33a2f-513d-483e-be03-87c8e09edc00-kube-api-access-dh5j9\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.878508 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-ca-bundles\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.879163 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.879402 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-proxy-ca-bundles\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.879561 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-buildworkdir\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.879775 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880028 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.879999 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-node-pullsecrets\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880137 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880310 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-pull\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880492 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-run\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880530 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-push\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880588 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-buildcachedir\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880629 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-build-blob-cache\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880744 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-root\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.880795 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-system-configs\") pod \"90e33a2f-513d-483e-be03-87c8e09edc00\" (UID: \"90e33a2f-513d-483e-be03-87c8e09edc00\") " Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.881143 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.881423 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.881691 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.881800 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.881814 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.881876 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.882179 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.882284 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.882380 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.882517 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.882608 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90e33a2f-513d-483e-be03-87c8e09edc00-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.882729 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.886953 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-push" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-push") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "builder-dockercfg-q2lg2-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.887229 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e33a2f-513d-483e-be03-87c8e09edc00-kube-api-access-dh5j9" (OuterVolumeSpecName: "kube-api-access-dh5j9") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "kube-api-access-dh5j9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.888551 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-pull" (OuterVolumeSpecName: "builder-dockercfg-q2lg2-pull") pod "90e33a2f-513d-483e-be03-87c8e09edc00" (UID: "90e33a2f-513d-483e-be03-87c8e09edc00"). InnerVolumeSpecName "builder-dockercfg-q2lg2-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.984333 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-pull\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-pull\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.984410 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/90e33a2f-513d-483e-be03-87c8e09edc00-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.984430 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-q2lg2-push\" (UniqueName: \"kubernetes.io/secret/90e33a2f-513d-483e-be03-87c8e09edc00-builder-dockercfg-q2lg2-push\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.984445 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/90e33a2f-513d-483e-be03-87c8e09edc00-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:32 crc kubenswrapper[5113]: I1208 17:56:32.984457 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dh5j9\" (UniqueName: \"kubernetes.io/projected/90e33a2f-513d-483e-be03-87c8e09edc00-kube-api-access-dh5j9\") on node \"crc\" DevicePath \"\"" Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.393160 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_90e33a2f-513d-483e-be03-87c8e09edc00/git-clone/0.log" Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.393230 5113 generic.go:358] "Generic (PLEG): container finished" podID="90e33a2f-513d-483e-be03-87c8e09edc00" containerID="9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab" exitCode=1 Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.393298 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"90e33a2f-513d-483e-be03-87c8e09edc00","Type":"ContainerDied","Data":"9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab"} Dec 08 17:56:33 crc kubenswrapper[5113]: 
I1208 17:56:33.393352 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.393378 5113 scope.go:117] "RemoveContainer" containerID="9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab" Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.393359 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"90e33a2f-513d-483e-be03-87c8e09edc00","Type":"ContainerDied","Data":"6ff7e12d19fdf90017629353d7094f537c9897cb9f198ad91b884131e923d9a0"} Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.431266 5113 scope.go:117] "RemoveContainer" containerID="9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab" Dec 08 17:56:33 crc kubenswrapper[5113]: E1208 17:56:33.432062 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab\": container with ID starting with 9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab not found: ID does not exist" containerID="9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab" Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.432145 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab"} err="failed to get container status \"9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab\": rpc error: code = NotFound desc = could not find container \"9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab\": container with ID starting with 9259713d612050a959ff6de8f18b670fe37ea2ecabcf569d16cd36757ddb8dab not found: ID does not exist" Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.444292 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:56:33 crc kubenswrapper[5113]: I1208 17:56:33.451637 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Dec 08 17:56:34 crc kubenswrapper[5113]: I1208 17:56:34.694397 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e33a2f-513d-483e-be03-87c8e09edc00" path="/var/lib/kubelet/pods/90e33a2f-513d-483e-be03-87c8e09edc00/volumes" Dec 08 17:56:53 crc kubenswrapper[5113]: I1208 17:56:53.256255 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:56:53 crc kubenswrapper[5113]: I1208 17:56:53.257006 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.642174 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-whc2r/must-gather-hzt9l"] Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.643884 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="90e33a2f-513d-483e-be03-87c8e09edc00" containerName="git-clone" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.643901 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e33a2f-513d-483e-be03-87c8e09edc00" containerName="git-clone" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.644028 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="90e33a2f-513d-483e-be03-87c8e09edc00" containerName="git-clone" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.653698 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.656137 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-whc2r\"/\"openshift-service-ca.crt\"" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.657286 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-whc2r\"/\"default-dockercfg-z8sv9\"" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.657477 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-whc2r\"/\"kube-root-ca.crt\"" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.657982 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-whc2r/must-gather-hzt9l"] Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.778603 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fde9d4f2-5c5a-4552-a060-0832bfef0bff-must-gather-output\") pod \"must-gather-hzt9l\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") " pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.779008 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48g8j\" (UniqueName: \"kubernetes.io/projected/fde9d4f2-5c5a-4552-a060-0832bfef0bff-kube-api-access-48g8j\") pod \"must-gather-hzt9l\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") " pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.881102 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fde9d4f2-5c5a-4552-a060-0832bfef0bff-must-gather-output\") pod \"must-gather-hzt9l\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") " pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.881225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-48g8j\" (UniqueName: \"kubernetes.io/projected/fde9d4f2-5c5a-4552-a060-0832bfef0bff-kube-api-access-48g8j\") pod \"must-gather-hzt9l\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") " pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.881638 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fde9d4f2-5c5a-4552-a060-0832bfef0bff-must-gather-output\") pod \"must-gather-hzt9l\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") " pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.903860 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-48g8j\" (UniqueName: \"kubernetes.io/projected/fde9d4f2-5c5a-4552-a060-0832bfef0bff-kube-api-access-48g8j\") pod \"must-gather-hzt9l\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") " pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:09 crc kubenswrapper[5113]: I1208 17:57:09.974447 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whc2r/must-gather-hzt9l" Dec 08 17:57:10 crc kubenswrapper[5113]: I1208 17:57:10.236988 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-whc2r/must-gather-hzt9l"] Dec 08 17:57:10 crc kubenswrapper[5113]: I1208 17:57:10.697598 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whc2r/must-gather-hzt9l" event={"ID":"fde9d4f2-5c5a-4552-a060-0832bfef0bff","Type":"ContainerStarted","Data":"878645a562f75a6d8f46bc6c1dc29698e8e200d39360af65b4d73a14645eceab"} Dec 08 17:57:14 crc kubenswrapper[5113]: E1208 17:57:14.774358 5113 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError" Dec 08 17:57:16 crc kubenswrapper[5113]: I1208 17:57:16.960622 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Dec 08 17:57:16 crc kubenswrapper[5113]: I1208 17:57:16.974386 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.005670 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54132: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.038750 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54144: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.075114 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54150: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.121801 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54156: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.188340 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54160: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.451346 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54172: no serving certificate available for the kubelet" Dec 08 17:57:17 crc kubenswrapper[5113]: I1208 17:57:17.653994 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54180: no serving certificate available for the kubelet" Dec 08 17:57:18 crc kubenswrapper[5113]: I1208 17:57:18.008678 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54188: no serving certificate available for the kubelet" Dec 08 17:57:18 crc kubenswrapper[5113]: I1208 17:57:18.677561 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54202: no serving certificate available for the kubelet" Dec 08 17:57:18 crc kubenswrapper[5113]: I1208 17:57:18.880870 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whc2r/must-gather-hzt9l" event={"ID":"fde9d4f2-5c5a-4552-a060-0832bfef0bff","Type":"ContainerStarted","Data":"6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e"} Dec 08 
17:57:18 crc kubenswrapper[5113]: I1208 17:57:18.880945 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whc2r/must-gather-hzt9l" event={"ID":"fde9d4f2-5c5a-4552-a060-0832bfef0bff","Type":"ContainerStarted","Data":"48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"} Dec 08 17:57:18 crc kubenswrapper[5113]: I1208 17:57:18.908534 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-whc2r/must-gather-hzt9l" podStartSLOduration=1.974428626 podStartE2EDuration="9.908497289s" podCreationTimestamp="2025-12-08 17:57:09 +0000 UTC" firstStartedPulling="2025-12-08 17:57:10.235634851 +0000 UTC m=+995.951427967" lastFinishedPulling="2025-12-08 17:57:18.169703514 +0000 UTC m=+1003.885496630" observedRunningTime="2025-12-08 17:57:18.902871385 +0000 UTC m=+1004.618664501" watchObservedRunningTime="2025-12-08 17:57:18.908497289 +0000 UTC m=+1004.624290405" Dec 08 17:57:19 crc kubenswrapper[5113]: I1208 17:57:19.628278 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54210: no serving certificate available for the kubelet" Dec 08 17:57:19 crc kubenswrapper[5113]: I1208 17:57:19.994393 5113 ???:1] "http: TLS handshake error from 192.168.126.11:54226: no serving certificate available for the kubelet" Dec 08 17:57:22 crc kubenswrapper[5113]: I1208 17:57:22.580386 5113 ???:1] "http: TLS handshake error from 192.168.126.11:58658: no serving certificate available for the kubelet" Dec 08 17:57:23 crc kubenswrapper[5113]: I1208 17:57:23.256159 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:57:23 crc kubenswrapper[5113]: I1208 17:57:23.256286 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:27 crc kubenswrapper[5113]: I1208 17:57:27.733296 5113 ???:1] "http: TLS handshake error from 192.168.126.11:58672: no serving certificate available for the kubelet" Dec 08 17:57:38 crc kubenswrapper[5113]: I1208 17:57:38.008879 5113 ???:1] "http: TLS handshake error from 192.168.126.11:48444: no serving certificate available for the kubelet" Dec 08 17:57:53 crc kubenswrapper[5113]: I1208 17:57:53.256159 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 17:57:53 crc kubenswrapper[5113]: I1208 17:57:53.257071 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 17:57:53 crc kubenswrapper[5113]: I1208 17:57:53.257139 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" Dec 08 
17:57:53 crc kubenswrapper[5113]: I1208 17:57:53.257906 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13d2c1fe38ff6a7a0cac1ade14681ccc0e31e7fbc1ba06630b2782faab18303e"} pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 17:57:53 crc kubenswrapper[5113]: I1208 17:57:53.257972 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" containerID="cri-o://13d2c1fe38ff6a7a0cac1ade14681ccc0e31e7fbc1ba06630b2782faab18303e" gracePeriod=600 Dec 08 17:57:54 crc kubenswrapper[5113]: I1208 17:57:54.134319 5113 generic.go:358] "Generic (PLEG): container finished" podID="52658507-b084-49cb-a694-f012d44ccc82" containerID="13d2c1fe38ff6a7a0cac1ade14681ccc0e31e7fbc1ba06630b2782faab18303e" exitCode=0 Dec 08 17:57:54 crc kubenswrapper[5113]: I1208 17:57:54.134379 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerDied","Data":"13d2c1fe38ff6a7a0cac1ade14681ccc0e31e7fbc1ba06630b2782faab18303e"} Dec 08 17:57:54 crc kubenswrapper[5113]: I1208 17:57:54.134771 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"b774cdd68266b83e8cd6eb707785fad3a39cb5fbfd46ce5927fadc5a78e9b66b"} Dec 08 17:57:54 crc kubenswrapper[5113]: I1208 17:57:54.134798 5113 scope.go:117] "RemoveContainer" containerID="e91e9c2d7b1e37ebd3bc5750a4f89f644abb6b97e12e01ad60b986cb9a1422b5" Dec 08 17:57:58 crc kubenswrapper[5113]: I1208 17:57:58.522008 5113 ???:1] "http: TLS handshake error from 192.168.126.11:33926: no serving certificate available for the kubelet" Dec 08 17:58:02 crc kubenswrapper[5113]: I1208 17:58:02.023618 5113 ???:1] "http: TLS handshake error from 192.168.126.11:33938: no serving certificate available for the kubelet" Dec 08 17:58:02 crc kubenswrapper[5113]: I1208 17:58:02.213089 5113 ???:1] "http: TLS handshake error from 192.168.126.11:33942: no serving certificate available for the kubelet" Dec 08 17:58:02 crc kubenswrapper[5113]: I1208 17:58:02.220959 5113 ???:1] "http: TLS handshake error from 192.168.126.11:33946: no serving certificate available for the kubelet" Dec 08 17:58:15 crc kubenswrapper[5113]: I1208 17:58:15.201374 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36818: no serving certificate available for the kubelet" Dec 08 17:58:15 crc kubenswrapper[5113]: I1208 17:58:15.393624 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36820: no serving certificate available for the kubelet" Dec 08 17:58:15 crc kubenswrapper[5113]: I1208 17:58:15.399766 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36824: no serving certificate available for the kubelet" Dec 08 17:58:33 crc kubenswrapper[5113]: I1208 17:58:33.880237 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38506: no serving certificate available for the kubelet" Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.103433 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38512: no serving certificate available for the kubelet" Dec 08 17:58:34 
crc kubenswrapper[5113]: I1208 17:58:34.122006 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38522: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.145748 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38530: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.316072 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38546: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.356346 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38552: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.357196 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38548: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.536562 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38560: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.709654 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38562: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.710798 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38570: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.783602 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38574: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.945406 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38584: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.945800 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38588: no serving certificate available for the kubelet"
Dec 08 17:58:34 crc kubenswrapper[5113]: I1208 17:58:34.966796 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38596: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.151528 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38612: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.289459 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38616: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.315602 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38622: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.378461 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38624: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.512190 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38630: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.529895 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38642: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.554575 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38658: no serving certificate available for the kubelet"
Dec 08 17:58:35 crc kubenswrapper[5113]: I1208 17:58:35.719544 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38660: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.239577 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38662: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.252913 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38664: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.268280 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38674: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.441573 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38686: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.475878 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38696: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.481484 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38698: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.638749 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38714: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.852297 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38724: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.888525 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38736: no serving certificate available for the kubelet"
Dec 08 17:58:36 crc kubenswrapper[5113]: I1208 17:58:36.922524 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38742: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.046697 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38750: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.074553 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38754: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.099413 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38760: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.236479 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38768: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.397157 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38776: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.416235 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38780: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.741220 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38788: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.984759 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38798: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.989124 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38800: no serving certificate available for the kubelet"
Dec 08 17:58:37 crc kubenswrapper[5113]: I1208 17:58:37.999831 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38808: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.005299 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38816: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.188534 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38820: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.367631 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38834: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.389568 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38850: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.403846 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38864: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.574500 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38872: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.603745 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38882: no serving certificate available for the kubelet"
Dec 08 17:58:38 crc kubenswrapper[5113]: I1208 17:58:38.621671 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38896: no serving certificate available for the kubelet"
Dec 08 17:58:39 crc kubenswrapper[5113]: I1208 17:58:39.520422 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38906: no serving certificate available for the kubelet"
Dec 08 17:58:51 crc kubenswrapper[5113]: I1208 17:58:51.849288 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55616: no serving certificate available for the kubelet"
Dec 08 17:58:52 crc kubenswrapper[5113]: I1208 17:58:52.033210 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55618: no serving certificate available for the kubelet"
Dec 08 17:58:52 crc kubenswrapper[5113]: I1208 17:58:52.064957 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55626: no serving certificate available for the kubelet"
Dec 08 17:58:52 crc kubenswrapper[5113]: I1208 17:58:52.223525 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55628: no serving certificate available for the kubelet"
Dec 08 17:58:52 crc kubenswrapper[5113]: I1208 17:58:52.296964 5113 ???:1] "http: TLS handshake error from 192.168.126.11:55640: no serving certificate available for the kubelet"
Dec 08 17:59:41 crc kubenswrapper[5113]: I1208 17:59:41.151275 5113 generic.go:358] "Generic (PLEG): container finished" podID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerID="48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d" exitCode=0
Dec 08 17:59:41 crc kubenswrapper[5113]: I1208 17:59:41.151367 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-whc2r/must-gather-hzt9l" event={"ID":"fde9d4f2-5c5a-4552-a060-0832bfef0bff","Type":"ContainerDied","Data":"48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"}
Dec 08 17:59:41 crc kubenswrapper[5113]: I1208 17:59:41.152899 5113 scope.go:117] "RemoveContainer" containerID="48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"
Dec 08 17:59:47 crc kubenswrapper[5113]: I1208 17:59:47.577752 5113 scope.go:117] "RemoveContainer" containerID="7e5efc8a2952e2e8af2eac282bddc26b7afc51b372744d30a05c5cc4fb96d3b2"
Dec 08 17:59:47 crc kubenswrapper[5113]: I1208 17:59:47.608981 5113 scope.go:117] "RemoveContainer" containerID="575a37d8b91a6d11e7c59291962b338893a4a6b131f1cdb00c2463d57d92db80"
Dec 08 17:59:47 crc kubenswrapper[5113]: I1208 17:59:47.638832 5113 scope.go:117] "RemoveContainer" containerID="bc5ca72d7a4fe4364f7a566c5383d4f94b45e82b5117f15e3ec73be8b078d1cb"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.779374 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40920: no serving certificate available for the kubelet"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.925693 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40932: no serving certificate available for the kubelet"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.936258 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40936: no serving certificate available for the kubelet"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.957186 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40950: no serving certificate available for the kubelet"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.969912 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40964: no serving certificate available for the kubelet"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.985945 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40980: no serving certificate available for the kubelet"
Dec 08 17:59:49 crc kubenswrapper[5113]: I1208 17:59:49.998688 5113 ???:1] "http: TLS handshake error from 192.168.126.11:40996: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.015049 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41000: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.029682 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41014: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.168556 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41030: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.181295 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41034: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.205858 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41048: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.216891 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41062: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.232469 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41072: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.242820 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41086: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.256573 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41092: no serving certificate available for the kubelet"
Dec 08 17:59:50 crc kubenswrapper[5113]: I1208 17:59:50.267505 5113 ???:1] "http: TLS handshake error from 192.168.126.11:41104: no serving certificate available for the kubelet"
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.308792 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-whc2r/must-gather-hzt9l"]
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.309521 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-whc2r/must-gather-hzt9l" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="copy" containerID="cri-o://6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e" gracePeriod=2
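[Note: the long runs of "no serving certificate available for the kubelet" above mean the kubelet is still waiting for its serving certificate (typically a kubelet-serving CSR that has not yet been approved), so every inbound HTTPS connection fails its TLS handshake. A minimal sketch for sizing these bursts from a saved journal, assuming the log text arrives on stdin; the script itself and its per-second bucketing are illustrative, not part of the kubelet:]

    import re
    import sys
    from collections import Counter

    # Matches entries of the form seen above, e.g.
    #   I1208 17:58:34.122006 5113 ???:1] "http: TLS handshake error from 192.168.126.11:38522: ..."
    PATTERN = re.compile(
        r'I\d{4} (\d{2}:\d{2}:\d{2})\.\d+ \d+ \?\?\?:1\] '
        r'"http: TLS handshake error from ([\d.]+):(\d+)'
    )

    failures_per_second = Counter()
    for line in sys.stdin:
        match = PATTERN.search(line)
        if match:
            failures_per_second[match.group(1)] += 1  # bucket by HH:MM:SS

    for second, count in sorted(failures_per_second.items()):
        print(f"{second}  {count} failed handshake(s)")

[Fed with something like "journalctl -u kubelet --no-pager | python3 tally_handshakes.py" (hypothetical file name, assuming the unit is kubelet.service), it would surface the bursts at 17:58:34-39, 17:58:51-52, and 17:59:49-50 recorded here.]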
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.313078 5113 status_manager.go:895] "Failed to get status for pod" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" pod="openshift-must-gather-whc2r/must-gather-hzt9l" err="pods \"must-gather-hzt9l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-whc2r\": no relationship found between node 'crc' and this object"
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.315643 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-whc2r/must-gather-hzt9l"]
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.814131 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-whc2r_must-gather-hzt9l_fde9d4f2-5c5a-4552-a060-0832bfef0bff/copy/0.log"
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.815057 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whc2r/must-gather-hzt9l"
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.816715 5113 status_manager.go:895] "Failed to get status for pod" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" pod="openshift-must-gather-whc2r/must-gather-hzt9l" err="pods \"must-gather-hzt9l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-whc2r\": no relationship found between node 'crc' and this object"
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.863197 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48g8j\" (UniqueName: \"kubernetes.io/projected/fde9d4f2-5c5a-4552-a060-0832bfef0bff-kube-api-access-48g8j\") pod \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") "
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.863290 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fde9d4f2-5c5a-4552-a060-0832bfef0bff-must-gather-output\") pod \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\" (UID: \"fde9d4f2-5c5a-4552-a060-0832bfef0bff\") "
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.871235 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde9d4f2-5c5a-4552-a060-0832bfef0bff-kube-api-access-48g8j" (OuterVolumeSpecName: "kube-api-access-48g8j") pod "fde9d4f2-5c5a-4552-a060-0832bfef0bff" (UID: "fde9d4f2-5c5a-4552-a060-0832bfef0bff"). InnerVolumeSpecName "kube-api-access-48g8j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.917096 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde9d4f2-5c5a-4552-a060-0832bfef0bff-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fde9d4f2-5c5a-4552-a060-0832bfef0bff" (UID: "fde9d4f2-5c5a-4552-a060-0832bfef0bff"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.964698 5113 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fde9d4f2-5c5a-4552-a060-0832bfef0bff-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 08 17:59:55 crc kubenswrapper[5113]: I1208 17:59:55.964741 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-48g8j\" (UniqueName: \"kubernetes.io/projected/fde9d4f2-5c5a-4552-a060-0832bfef0bff-kube-api-access-48g8j\") on node \"crc\" DevicePath \"\""
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.268923 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-whc2r_must-gather-hzt9l_fde9d4f2-5c5a-4552-a060-0832bfef0bff/copy/0.log"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.270285 5113 generic.go:358] "Generic (PLEG): container finished" podID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerID="6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e" exitCode=143
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.270376 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-whc2r/must-gather-hzt9l"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.270379 5113 scope.go:117] "RemoveContainer" containerID="6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.272327 5113 status_manager.go:895] "Failed to get status for pod" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" pod="openshift-must-gather-whc2r/must-gather-hzt9l" err="pods \"must-gather-hzt9l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-whc2r\": no relationship found between node 'crc' and this object"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.287797 5113 status_manager.go:895] "Failed to get status for pod" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" pod="openshift-must-gather-whc2r/must-gather-hzt9l" err="pods \"must-gather-hzt9l\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-whc2r\": no relationship found between node 'crc' and this object"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.297462 5113 scope.go:117] "RemoveContainer" containerID="48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.372626 5113 scope.go:117] "RemoveContainer" containerID="6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e"
Dec 08 17:59:56 crc kubenswrapper[5113]: E1208 17:59:56.373220 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e\": container with ID starting with 6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e not found: ID does not exist" containerID="6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.373256 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e"} err="failed to get container status \"6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e\": rpc error: code = NotFound desc = could not find container \"6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e\": container with ID starting with 6bc7d7d5986ca3eb90ea5005c19d32ec892038ee82ec25713293d7527c11a42e not found: ID does not exist"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.373280 5113 scope.go:117] "RemoveContainer" containerID="48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"
Dec 08 17:59:56 crc kubenswrapper[5113]: E1208 17:59:56.373680 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d\": container with ID starting with 48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d not found: ID does not exist" containerID="48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.373708 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d"} err="failed to get container status \"48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d\": rpc error: code = NotFound desc = could not find container \"48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d\": container with ID starting with 48b1858bd4d739920db44463bb1b08b9dbe852b0151f87cb598abe0d0781665d not found: ID does not exist"
Dec 08 17:59:56 crc kubenswrapper[5113]: I1208 17:59:56.689154 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" path="/var/lib/kubelet/pods/fde9d4f2-5c5a-4552-a060-0832bfef0bff/volumes"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.146293 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"]
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.148077 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="gather"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.148097 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="gather"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.148133 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="copy"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.148138 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="copy"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.148275 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="gather"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.148289 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="fde9d4f2-5c5a-4552-a060-0832bfef0bff" containerName="copy"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.169338 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"]
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.169634 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
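[Note: the "DeleteContainer returned error ... NotFound" pairs above are benign. The kubelet re-issues RemoveContainer for containers CRI-O has already deleted, and an already-absent container is treated as successfully removed. A sketch of that idempotent-delete pattern under assumed names; NotFoundError and the runtime interface are illustrative, not the kubelet's real types:]

    class NotFoundError(Exception):
        """Raised by the (hypothetical) runtime when a container ID is unknown."""

    def remove_container(runtime, container_id: str) -> None:
        # Deletion must be idempotent: retries and races with garbage
        # collection mean the container may already be gone by the time
        # a second removal request arrives.
        try:
            runtime.remove(container_id)
        except NotFoundError:
            # Already removed elsewhere; absence is the desired end state,
            # so log-and-continue rather than fail, as the kubelet does above.
            pass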
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.181291 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.190004 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.229680 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce5651d-c6b1-4971-85e9-670eb49be18f-config-volume\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.229768 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce5651d-c6b1-4971-85e9-670eb49be18f-secret-volume\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.229957 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvqn\" (UniqueName: \"kubernetes.io/projected/cce5651d-c6b1-4971-85e9-670eb49be18f-kube-api-access-hgvqn\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.331448 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce5651d-c6b1-4971-85e9-670eb49be18f-secret-volume\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.331529 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvqn\" (UniqueName: \"kubernetes.io/projected/cce5651d-c6b1-4971-85e9-670eb49be18f-kube-api-access-hgvqn\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.331649 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce5651d-c6b1-4971-85e9-670eb49be18f-config-volume\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.332823 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce5651d-c6b1-4971-85e9-670eb49be18f-config-volume\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.343191 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce5651d-c6b1-4971-85e9-670eb49be18f-secret-volume\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.349958 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvqn\" (UniqueName: \"kubernetes.io/projected/cce5651d-c6b1-4971-85e9-670eb49be18f-kube-api-access-hgvqn\") pod \"collect-profiles-29420280-st6fj\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.494694 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:00 crc kubenswrapper[5113]: I1208 18:00:00.730467 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"]
Dec 08 18:00:01 crc kubenswrapper[5113]: I1208 18:00:01.312941 5113 generic.go:358] "Generic (PLEG): container finished" podID="cce5651d-c6b1-4971-85e9-670eb49be18f" containerID="78cbfee345133cca47fa8b57b695559d53bec6af175b098642d9187a158375b9" exitCode=0
Dec 08 18:00:01 crc kubenswrapper[5113]: I1208 18:00:01.313119 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj" event={"ID":"cce5651d-c6b1-4971-85e9-670eb49be18f","Type":"ContainerDied","Data":"78cbfee345133cca47fa8b57b695559d53bec6af175b098642d9187a158375b9"}
Dec 08 18:00:01 crc kubenswrapper[5113]: I1208 18:00:01.313423 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj" event={"ID":"cce5651d-c6b1-4971-85e9-670eb49be18f","Type":"ContainerStarted","Data":"865b52d997d537fe496971610cab0ee6d43ac14fa367bfa9a33867ce1c5ee906"}
Dec 08 18:00:01 crc kubenswrapper[5113]: I1208 18:00:01.476809 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36932: no serving certificate available for the kubelet"
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.587509 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
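[Note: each "SyncLoop (PLEG): event for pod" entry embeds a small JSON payload (ID, Type, Data), which makes it straightforward to reconstruct a pod's container lifecycle from the journal. A sketch, assuming the journal text is piped on stdin:]

    import json
    import re
    import sys

    # Captures the pod name and the JSON event payload from lines like:
    #   kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="ns/name" event={"ID":...}
    EVENT = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(\{[^}]*\})')

    for line in sys.stdin:
        match = EVENT.search(line)
        if match:
            pod, event = match.group(1), json.loads(match.group(2))
            # e.g. .../collect-profiles-29420280-st6fj ContainerDied 78cbfee3...
            print(pod, event["Type"], event["Data"])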
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.669579 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce5651d-c6b1-4971-85e9-670eb49be18f-config-volume\") pod \"cce5651d-c6b1-4971-85e9-670eb49be18f\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") "
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.669744 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgvqn\" (UniqueName: \"kubernetes.io/projected/cce5651d-c6b1-4971-85e9-670eb49be18f-kube-api-access-hgvqn\") pod \"cce5651d-c6b1-4971-85e9-670eb49be18f\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") "
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.669927 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce5651d-c6b1-4971-85e9-670eb49be18f-secret-volume\") pod \"cce5651d-c6b1-4971-85e9-670eb49be18f\" (UID: \"cce5651d-c6b1-4971-85e9-670eb49be18f\") "
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.670912 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce5651d-c6b1-4971-85e9-670eb49be18f-config-volume" (OuterVolumeSpecName: "config-volume") pod "cce5651d-c6b1-4971-85e9-670eb49be18f" (UID: "cce5651d-c6b1-4971-85e9-670eb49be18f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.687348 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce5651d-c6b1-4971-85e9-670eb49be18f-kube-api-access-hgvqn" (OuterVolumeSpecName: "kube-api-access-hgvqn") pod "cce5651d-c6b1-4971-85e9-670eb49be18f" (UID: "cce5651d-c6b1-4971-85e9-670eb49be18f"). InnerVolumeSpecName "kube-api-access-hgvqn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.687375 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cce5651d-c6b1-4971-85e9-670eb49be18f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cce5651d-c6b1-4971-85e9-670eb49be18f" (UID: "cce5651d-c6b1-4971-85e9-670eb49be18f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.773366 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cce5651d-c6b1-4971-85e9-670eb49be18f-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.773413 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cce5651d-c6b1-4971-85e9-670eb49be18f-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 18:00:02 crc kubenswrapper[5113]: I1208 18:00:02.773424 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgvqn\" (UniqueName: \"kubernetes.io/projected/cce5651d-c6b1-4971-85e9-670eb49be18f-kube-api-access-hgvqn\") on node \"crc\" DevicePath \"\""
Dec 08 18:00:03 crc kubenswrapper[5113]: I1208 18:00:03.329829 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj" event={"ID":"cce5651d-c6b1-4971-85e9-670eb49be18f","Type":"ContainerDied","Data":"865b52d997d537fe496971610cab0ee6d43ac14fa367bfa9a33867ce1c5ee906"}
Dec 08 18:00:03 crc kubenswrapper[5113]: I1208 18:00:03.329888 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865b52d997d537fe496971610cab0ee6d43ac14fa367bfa9a33867ce1c5ee906"
Dec 08 18:00:03 crc kubenswrapper[5113]: I1208 18:00:03.329919 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420280-st6fj"
Dec 08 18:00:23 crc kubenswrapper[5113]: I1208 18:00:23.256164 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:00:23 crc kubenswrapper[5113]: I1208 18:00:23.257080 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 18:00:35 crc kubenswrapper[5113]: I1208 18:00:35.543088 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g9mkp_c4621882-3d98-4910-9263-5959d2302427/kube-multus/0.log"
Dec 08 18:00:35 crc kubenswrapper[5113]: I1208 18:00:35.546534 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 08 18:00:35 crc kubenswrapper[5113]: I1208 18:00:35.555299 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g9mkp_c4621882-3d98-4910-9263-5959d2302427/kube-multus/0.log"
Dec 08 18:00:35 crc kubenswrapper[5113]: I1208 18:00:35.557727 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/1.log"
Dec 08 18:00:53 crc kubenswrapper[5113]: I1208 18:00:53.667052 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:00:53 crc kubenswrapper[5113]: I1208 18:00:53.668091 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.256127 5113 patch_prober.go:28] interesting pod/machine-config-daemon-mf4d4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.256776 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.256834 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4"
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.257512 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b774cdd68266b83e8cd6eb707785fad3a39cb5fbfd46ce5927fadc5a78e9b66b"} pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.257578 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" podUID="52658507-b084-49cb-a694-f012d44ccc82" containerName="machine-config-daemon" containerID="cri-o://b774cdd68266b83e8cd6eb707785fad3a39cb5fbfd46ce5927fadc5a78e9b66b" gracePeriod=600
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.919334 5113 generic.go:358] "Generic (PLEG): container finished" podID="52658507-b084-49cb-a694-f012d44ccc82" containerID="b774cdd68266b83e8cd6eb707785fad3a39cb5fbfd46ce5927fadc5a78e9b66b" exitCode=0
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.919426 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerDied","Data":"b774cdd68266b83e8cd6eb707785fad3a39cb5fbfd46ce5927fadc5a78e9b66b"}
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.920548 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mf4d4" event={"ID":"52658507-b084-49cb-a694-f012d44ccc82","Type":"ContainerStarted","Data":"5ed9509021a813d3830302c7974796519ac7d6a56a7557877d8a637dcecbe282"}
Dec 08 18:01:23 crc kubenswrapper[5113]: I1208 18:01:23.920716 5113 scope.go:117] "RemoveContainer" containerID="13d2c1fe38ff6a7a0cac1ade14681ccc0e31e7fbc1ba06630b2782faab18303e"
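[Note: the machine-config-daemon restart above is driven by a plain HTTP liveness probe against 127.0.0.1:8798/health; "connect: connection refused" means nothing was listening, so after repeated failures the kubelet kills the container (gracePeriod=600) and starts a replacement, as the ContainerDied/ContainerStarted pair shows. A standalone sketch of an equivalent check; the port and path are taken from the log, while the helper itself is illustrative, not the kubelet's prober:]

    import http.client

    def probe(host: str = "127.0.0.1", port: int = 8798,
              path: str = "/health", timeout: float = 1.0) -> bool:
        """Mimic an HTTP liveness probe: any 2xx/3xx response counts as healthy."""
        try:
            conn = http.client.HTTPConnection(host, port, timeout=timeout)
            conn.request("GET", path)
            healthy = 200 <= conn.getresponse().status < 400
            conn.close()
            return healthy
        except OSError:
            # "connect: connection refused" (and timeouts) land here.
            return False

    print("healthy" if probe() else "probe failed")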